Enhanced Block Request Streaming Using Scalable Encoding
Patent Abstract:
IMPROVED BLOCK REQUEST STREAMING USING SCALABLE ENCODING. A block-request streaming system provides improvements in the user experience and bandwidth efficiency of such systems, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or similar), where the ingestion system receives content and prepares it as files or data elements to be served by the file server. A client device can be adapted to take advantage of the ingestion process, in addition to including enhancements that improve presentation independently of the ingestion process. Files or data elements are organized as blocks that are transmitted and decoded as a unit, and the system is configured to provide and consume scalable blocks so that the quality of the presentation increases as more blocks are downloaded. Block encoding and decoding with multiple independent scalability layers can be performed as well. Publication number: BR112012006377B1. Application number: R112012006377-4. Filing date: 2010-09-22. Publication date: 2021-05-18. Inventors: Michael G. Luby; Ying Chen; Thomas Stockhammer. Applicant: Qualcomm Incorporated. Primary IPC:
Patent Description:
Cross References to Related Applications [0001] This application is a non-provisional patent application claiming benefit under the following provisional applications, each naming Michael G. Luby, et al., and each entitled "Enhanced Block-Request Streaming System": U.S. Provisional Patent Application No. 61/244,767, filed September 22, 2009; U.S. Provisional Patent Application No. 61/257,719, filed November 3, 2009; U.S. Provisional Patent Application No. 61/258,088, filed November 4, 2009; U.S. Provisional Patent Application No. 61/285,779, filed December 11, 2009; and U.S. Provisional Patent Application No. 61/296,725, filed January 20, 2010. [0002] This application also claims benefit under U.S. Provisional Patent Application No. 61/372,399, filed August 10, 2010, naming Ying Chen, et al. and entitled "HTTP Streaming Extensions". [0003] Each provisional application cited above is incorporated herein by reference for all purposes. The present description also incorporates by reference, as if set forth in its entirety herein, for all purposes, the following commonly assigned patents/applications: U.S. Patent No. 6,307,487 to Luby (hereinafter "Luby I"); U.S. Patent No. 7,068,729 to Shokrollahi, et al. (hereinafter "Shokrollahi I"); U.S. Patent Application No. 11/423,391, filed June 9, 2006 and entitled "Forward Error Correcting (FEC) Coding and Streaming", naming Luby, et al. (hereinafter "Luby II"); U.S. Patent Application No. 12/103,605, filed April 15, 2008, entitled "Dynamic Stream Interleaving and Sub-Stream Based Delivery", naming Luby, et al. (hereinafter "Luby III"); U.S. Patent Application No. 12/705,202, filed February 12, 2010, entitled "Block Partitioning for a Data Stream", naming Pakzad, et al. (hereinafter "Pakzad"); and U.S. Patent Application No. 12/859,161, filed August 18, 2010, entitled "Methods and Apparatus Employing FEC Codes with Permanent Inactivation of Symbols for Encoding and Decoding Processes", naming Luby, et al. (hereinafter "Luby IV"). 
Field of Invention [0004] The present invention relates to improved media streaming systems and methods, more particularly to systems and methods that are adaptive to network and storage conditions in order to optimize a streamed media presentation and allow for efficient concurrent and timely distribution of streamed media data. Background of the Invention [0005] Streaming media delivery may become increasingly important as it becomes more common for high quality audio and video to be distributed over packet-based networks such as the Internet, cellular and wireless networks, power line networks, and other types of networks. The quality with which the delivered streaming media can be presented can depend on a number of factors, including the resolution (or other attributes) of the original content, the encoding quality of the original content, the capabilities of the receiving devices to decode and present the media, the timeliness and quality of the signal received at the receivers, etc. In order to create a good streaming media experience, the transport and timeliness of the signal received at the receivers can be especially important. Good transport can provide fidelity of the stream received at the receiver with respect to what a sender sends, while timeliness can represent how quickly a receiver can start playing content after an initial request for that content. [0006] A media distribution system can be characterized as a system having media sources, media destinations, and channels (in time and/or space) separating sources and destinations. Typically, a source includes a transmitter with access to the media in electronically manageable form, and a destination includes a receiver with an ability to electronically control the receipt of the media (or an approximation thereof) and deliver it to a media consumer (e.g., a user having a display device coupled in some way to the receiver, a storage device or element, another channel, etc.). 
[0007] While many variations are possible, in a common example, a media distribution system has one or more servers that have access to media content in electronic form, and one or more client systems or devices make requests for media to the servers, and the servers transport the media using a transmitter as part of the server, transmitting to a receiver on the client so that the received media can be consumed by the client in some way. In a simple example, there is one server and one client for a given request and response, but that need not be the case. [0008] Traditionally, media distribution systems can be characterized by either a "download" model or a "streaming" model. The "download" model can be characterized by timing independence between the distribution of the media data and the playback of the media to the user device or recipient. [0009] As an example, the media is downloaded far enough in advance of when it is needed or will be used so that, when it is used, it is already available at the recipient. Distribution in the download context is often performed using a file transport protocol, such as HTTP, FTP, or File Delivery over Unidirectional Transport (FLUTE), and the distribution rate can be determined by an underlying flow and/or congestion control protocol, such as TCP/IP. The operation of the flow or congestion control protocol can be independent of the playback of the media to the user or destination device, which can occur simultaneously with the download or at some other time. [0010] The "streaming" model can be characterized by a tight coupling between the timing of media data distribution and the playback of the media to the user device or recipient. Distribution in this context is often accomplished using a streaming protocol, such as the Real-Time Streaming Protocol (RTSP) for control and the Real-Time Transport Protocol (RTP) for the media data. The distribution rate can be determined by a streaming server, often matching the playback rate of the data. 
[0011] Some disadvantages of the "download" model may be that, due to the timing independence of distribution and playback, the media data may not be available when it is needed for playback, or valuable network resources may be consumed for distribution of content that is eventually not played or otherwise used. [0012] An advantage of the "download" model may be that the technology needed to perform such downloads, e.g., HTTP, is very mature, widely deployed, and applicable across a wide range of applications. Download servers and solutions for the massive scalability of such file downloads (e.g., HTTP web servers and content delivery networks) can be readily available, making the deployment of services based on this technology simple and inexpensive. [0013] Some disadvantages of the "streaming" model may be that generally the media data distribution rate is not adapted to the available bandwidth on the connection between server and client, and that specialized streaming servers or a more complex network architecture providing bandwidth and delay guarantees are required. Although streaming systems exist that support varying the distribution data rate according to the available bandwidth (e.g., Adobe Flash Adaptive Streaming), these are generally not as efficient as download transport flow control protocols, such as TCP, at utilizing all of the available bandwidth. [0014] Recently, new media distribution systems based on a combination of the "streaming" and "download" models have been developed. An example of such a model is referred to here as a "block-request streaming" model, where a media client requests blocks of media data from the server infrastructure using a download protocol such as HTTP. 
A concern with such systems may be the ability to start playing a stream, for example, decoding and rendering received audio and video streams using a personal computer and displaying the video on a computer screen and playing the audio through built-in speakers, or as another example, decoding and rendering received audio and video streams using a set-top box and displaying the video on a television monitor device and playing the audio through a stereo system. [0015] Other concerns, such as being able to decode source blocks fast enough to keep up with the source streaming rate, to minimize decoding latency, and to reduce the use of available CPU resources, are issues as well. Another concern is providing a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to receivers. Other problems can occur based on rapidly changing information about a presentation as it is being distributed. Thus, it is desirable to have improved processes and apparatus. Brief Summary of the Invention [0016] A block-request streaming system provides improvements in the user experience and bandwidth efficiency of such systems, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or similar), where the ingestion system collects content and prepares it as files or data elements to be served by the file server, which may or may not include a temporary store. A client device can be adapted to take advantage of the ingestion process and include enhancements that create a better presentation independently of the ingestion process. Files or data elements are organized as blocks that are transmitted and decoded as a unit, and the system is configured to provide and consume scalable blocks so that the quality of the presentation increases as more of a block is downloaded. 
In some embodiments, novel enhancements to block encoding and decoding methods with multiple layers of independent scalability are provided. [0017] The following detailed description, together with the attached drawings, will provide a better understanding of the nature and advantages of the present invention. Brief Description of Drawings [0018] Figure 1 shows elements of a block-request streaming system according to embodiments of the present invention; [0019] Figure 2 illustrates the block-request streaming system of Figure 1, illustrating more details on the elements of a client system that is coupled to a block serving infrastructure ("BSI") to receive data that is processed by a content ingestion system; [0020] Figure 3 illustrates a hardware/software implementation of an ingestion system; [0021] Figure 4 illustrates a hardware/software implementation of a client system; [0022] Figure 5 illustrates possible structures of the content store illustrated in Figure 1, including segments and media presentation description ("MPD") files, and a breakdown of the segments, timing, and other structure within an MPD file; [0023] Figure 6 illustrates details of a typical source segment, as it can be stored in the content store illustrated in Figures 1 and 5; [0024] Figures 7a and 7b illustrate simple and hierarchical indexing within files; [0025] Figure 8a illustrates variable block sizing with seek points aligned across a plurality of versions of a media stream; [0026] Figure 8b illustrates variable block sizing with non-aligned seek points across a plurality of versions of a media stream; [0027] Figure 9a illustrates a metadata table; [0028] Figure 9b illustrates the transmission of blocks and the metadata table from the server to the client; [0029] Figure 10 illustrates blocks that are independent of RAP boundaries; [0030] Figure 11 illustrates continuous and discontinuous timing across segments; [0031] Figure 12 is a figure illustrating an aspect of 
scalable blocks; [0032] Figure 13 illustrates a graphical representation of the evolution of certain variables within a block-request streaming system over time; [0033] Figure 14 presents another graphical representation of the evolution of certain variables within a block-request streaming system over time; [0034] Figure 15 presents a cell grid of states as a function of threshold values; [0035] Figure 16 is a flowchart of a process that can be performed at a receiver that can request single blocks and multiple blocks per request; [0036] Figure 17 is a flowchart of a flexible streaming process; [0037] Figure 18 illustrates an example of a candidate set of requests, their priorities, and the connections on which they can be issued, at a given time; [0038] Figure 19 illustrates an example of a candidate set of requests, their priorities, and the connections on which they can be issued, as it has evolved from one moment to another; [0039] Figure 20 is a flowchart of caching server proxy selection based on a file identifier; [0040] Figure 21 illustrates a syntax definition for a suitable expression language; [0041] Figure 22 illustrates an example of a suitable hash function; [0042] Figure 23 illustrates examples of file identifier construction rules; [0043] Figures 24(a) to 24(c) illustrate the bandwidth fluctuations of TCP connections; [0044] Figure 25 illustrates multiple HTTP requests for source and repair data; [0045] Figure 26 illustrates an example channel zapping time with and without FEC; [0046] Figure 27 illustrates details of a repair segment generator that, as part of the ingestion system illustrated in Figure 1, generates repair segments from the source segments and control parameters; [0047] Figure 28 illustrates the relationships between source blocks and repair blocks; [0048] Figure 29 illustrates a procedure for live services at different times at the client. 
[0049] In the figures, similar items are referred to with similar references, and subindices are provided in parentheses to indicate multiple instances of similar or identical items. Unless otherwise indicated, the final subindex (e.g., "N" or "M") is not meant to be limited to any particular value, and the number of instances of one item may differ from the number of instances of another item even when the same number is illustrated and the subindex is reused. Detailed Description of the Invention [0050] As described here, an objective of a streaming system is to move media from its storage location (or the location where it is generated) to a location where it is being consumed, that is, presented to a user or otherwise "used" by a human or an electronic consumer. Ideally, the streaming system can provide uninterrupted playback (or, more generally, uninterrupted "consumption") at the receiving end and can start playing a stream or collection of streams soon after a user has requested the stream or streams. For reasons of efficiency, it is also desirable that each stream be halted once the user indicates that the stream is no longer needed, for example when the user is switching from one stream to another stream, or when the user disables the presentation of a stream, for example, a "subtitle" stream. If a media component, such as video, continues to be presented, but a different stream is selected to present that media component, it is often preferable to fill the limited bandwidth with the new stream and halt the old stream. [0051] A block-request streaming system according to the embodiments described here provides many benefits. It should be understood that a viable system need not include all of the features described here, as some applications can provide an adequately satisfying experience with less than all of the features described here. HTTP streaming [0052] HTTP streaming is a specific type of streaming. 
With HTTP streaming, sources can be standard web servers and content delivery networks (CDNs), and standard HTTP can be used. This technique can involve stream segmentation and the use of multiple streams, all within the context of standardized HTTP requests. Media, such as video, can be encoded at multiple bit rates to form different versions, or representations. The terms "version" and "representation" are used interchangeably in this document. Each version or representation can be broken down into smaller pieces, perhaps on the order of a few seconds each, to form segments. Each segment can then be stored on a web server or CDN as a separate file. [0053] On the client side, requests can then be made, using HTTP, for individual segments that are spliced together seamlessly by the client. The client can switch to different data rates based on the available bandwidth. The client can also request multiple representations, each presenting a different media component, and can present the media in these representations together and synchronously. Triggers for switching can include network measurements and buffer occupancy, for example. When operating in steady state, the client can pace its requests to the server in order to maintain a target buffer occupancy. [0054] The advantages of HTTP streaming can include bit rate adaptation, fast initialization and seeking, and minimal unnecessary distribution. These advantages come from controlling the distribution so that it is only slightly ahead of playback, making maximum use of the available bandwidth (via variable bit rate media), and optimized stream segmentation and intelligent client procedures. [0055] A media presentation description can be provided to an HTTP streaming client so that the client can use a collection of files (for example, in formats specified by 3GPP, here called 3GP segments) to provide a streaming service to the user. 
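The rate-switching behavior described above can be sketched as a small selection routine. The following is a minimal illustration, assuming a hypothetical set of representation bit rates, a hypothetical safety margin, and an invented segment URL layout; none of these come from the specification itself.

```python
# Hypothetical sketch of client-side bit rate adaptation for HTTP streaming.
# The bit rates, safety margin, and URL naming convention are illustrative
# assumptions, not part of the patent.

def choose_representation(bitrates, measured_bps, safety=0.8):
    """Pick the highest bit rate that fits within a safety margin of the
    measured bandwidth; fall back to the lowest representation otherwise."""
    usable = [b for b in sorted(bitrates) if b <= measured_bps * safety]
    return usable[-1] if usable else min(bitrates)

def segment_url(base_url, rep_bps, segment_index):
    """Build a segment URL under an assumed (hypothetical) naming scheme."""
    return f"{base_url}/rep_{rep_bps}/seg_{segment_index:04d}.3gp"

if __name__ == "__main__":
    bitrates = [500_000, 1_500_000, 3_000_000]  # available representations
    measured = 2_200_000                        # last measured throughput (bps)
    rep = choose_representation(bitrates, measured)
    print(rep)                                  # 1500000
    print(segment_url("http://example.com/video", rep, 7))
```

A real client would re-run this selection on every segment boundary, feeding in a smoothed bandwidth estimate and, as noted above, its current buffer occupancy.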
A media presentation description, and possibly updates to that media presentation description, describe a media presentation that is a structured collection of segments, each containing media components, so that the client can present the included media synchronously and can provide advanced features such as seeking, switching bit rates, and joint presentation of media components in different representations. The client may use the media presentation description information in different ways to provide the service. In particular, from the media presentation description, the HTTP streaming client can determine which segments in the collection can be accessed so that the data is useful to the capabilities of the client and to the user within the streaming service. [0056] In some embodiments, the media presentation description can be static, although the segments can be created dynamically. The media presentation description can be as compact as possible to minimize access and download time for the service. Other dedicated server connectivity, for example regular or frequent timing synchronization between client and server, can be minimized. [0057] The media presentation can be constructed to allow access by terminals with different capabilities, such as access to different types of access networks, different current network conditions, display sizes, access bit rates, and codec support. The client can then extract the appropriate information to provide the streaming service to the user. [0058] The media presentation description can also allow for deployment flexibility and compactness according to the requirements. [0059] In the simplest case, each Alternative Representation may be stored in a single 3GP file, that is, a conforming file as defined in 3GPP TS 26.244, or any other file that conforms to the ISO base media file format as defined in ISO/IEC 14496-12 or derived specifications (such as the 3GP file format described in 3GPP Technical Specification 26.244). 
In the remainder of this document, when referring to a 3GP file, it should be understood that ISO/IEC 14496-12 and derived specifications can map all of the described characteristics to the more general ISO base media file format as defined in ISO/IEC 14496-12 or any derived specifications. The client can then request an initial part of the file to learn the media metadata (which is typically stored in the Movie box, also referred to as the "moov" box) along with the movie fragment times and byte offsets. The client can then issue HTTP partial get requests to obtain movie fragments as needed. [0060] In some embodiments it may be desirable to divide each representation into several segments. If the segment format is based on the 3GP file format, then the segments contain non-overlapping time slices of the movie fragments, called "time-wise splitting". Each of these segments can contain multiple movie fragments, and each can be a valid 3GP file in its own right. In another embodiment, the representation is divided into an initial segment containing the metadata (typically a Movie Header "moov" box) and a set of media segments, each containing media data, such that the concatenation of the initial segment and any media segment forms a valid 3GP file, and the concatenation of the initial segment and all media segments of a representation also forms a valid 3GP file. The entire presentation can be formed by playing each segment in turn, mapping the local timestamps within the file to the global presentation time according to the start time of each representation. [0061] It should be noted that throughout this description references to a "segment" should be understood to include any data object that is wholly or partially constructed or read from a storage medium or otherwise obtained as a result of a file download protocol request, including, for example, an HTTP request. 
For example, in the case of HTTP, data objects can be stored in actual files residing on a disk or other storage medium connected to or forming part of an HTTP server, or data objects can be constructed by a CGI script, or another program executed dynamically, which runs in response to the HTTP request. The terms "file" and "segment" are used interchangeably in this document unless otherwise specified. In the case of HTTP, the segment can be thought of as the entity body of a response to an HTTP request. [0062] The terms "presentation" and "content item" are used interchangeably in this document. In many examples, the presentation is an audio, video, or other media presentation that has a defined "playout" time, but other variations are possible. [0063] The terms "block" and "fragment" are used synonymously in this document unless otherwise specified and generally refer to the smallest aggregate of data that is indexed. Based on the available indexing, a client may request different parts of a fragment in different HTTP requests, or may request one or more consecutive fragments or parts of fragments in a single HTTP request. In the case where the segment-based ISO base media file format or the segment-based 3GP file format is used, a fragment typically refers to a movie fragment, defined as the combination of a movie fragment header box ("moof") and a media data box ("mdat"). [0064] Here, a network carrying data is considered to be packet-based in order to simplify the description, with the recognition that, after reading this description, those skilled in the art can apply the embodiments of the present invention described here to other types of networks, such as continuous bitstream networks. 
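The partial get requests and fragment indexing discussed above can be sketched as the construction of HTTP Range headers from a byte-offset index. The index format below, a list of (byte_offset, byte_size) pairs, one per movie fragment, is a simplifying assumption for illustration; real segment indexes carry more structure.

```python
# Sketch: turning a (hypothetical) fragment index into HTTP Range header
# values for partial GET requests. The index format is a simplifying
# assumption: one (byte_offset, byte_size) pair per movie fragment.

def range_header(index, first, last=None):
    """Return a Range header value covering fragments first..last
    (inclusive). HTTP byte ranges are inclusive on both ends."""
    last = first if last is None else last
    start = index[first][0]
    end = index[last][0] + index[last][1] - 1
    return f"bytes={start}-{end}"

if __name__ == "__main__":
    # Hypothetical index of three movie fragments ("moof" + "mdat") in a segment.
    index = [(0, 100), (100, 250), (350, 80)]
    print(range_header(index, 1))     # bytes=100-349 (a single fragment)
    print(range_header(index, 1, 2))  # bytes=100-429 (two consecutive fragments)
```

A client would attach the returned value as a `Range:` header on a GET for the segment URL, expecting a 206 (Partial Content) response whose entity body is the requested fragment bytes.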
[0065] Here, FEC codes are considered to provide protection against long and variable data distribution times in order to simplify the descriptions presented here, with the recognition that, after reading this description, those skilled in the art can apply embodiments of the present invention to other types of data transmission problems, such as bit-flip corruption of data. For example, without FEC, if the last part of a requested fragment arrives much later than, or has a high variation in its arrival time with respect to, the earlier parts of the fragment, then the content zapping time can be large and variable, whereas by using FEC and parallel requests, only a majority of the data requested for a fragment needs to arrive before it can be recovered, thus reducing the content zapping time and the variation in content zapping time. In this description, it can be assumed that the data to be encoded (i.e., the source data) has been divided into "symbols" of equal length, which can be of any length (down to a single bit), but symbols can have different lengths for different parts of the data; for example, different symbol sizes can be used for different blocks of data. [0066] In this description, in order to simplify the descriptions presented here, it is considered that FEC is applied to one "block" or "fragment" of data at a time, that is, a "block" is a "source block" for FEC encoding and decoding purposes. A client device can use the segment indexing described here to help determine the source block structure of a segment. Those skilled in the art may apply embodiments of the present invention to other types of source block structures; for example, a source block may be a part of a fragment, or encompass one or more fragments or parts of fragments. [0067] The FEC codes considered for use with block-request streaming are typically systematic FEC codes, that is, the source symbols of the source block can be included as part of the encoding of the source block, and thus the source symbols are transmitted. 
As those skilled in the art will recognize, the embodiments described here apply equally well to FEC codes that are not systematic. A systematic FEC encoder generates, from a source block of source symbols, some number of repair symbols, and the combination of at least some of the source and repair symbols are the encoded symbols that are sent over the channel representing the source block. Some FEC codes can be useful for efficiently generating as many repair symbols as needed, such as "information additive codes" or "fountain codes", and examples of these codes include "chain reaction codes" and "multi-stage chain reaction codes". Other FEC codes, such as Reed-Solomon codes, can practically only generate a limited number of repair symbols for each source block. [0068] It is assumed in many of these examples that a client is coupled to a media server or a plurality of media servers, and that the client requests streaming media over a channel or a plurality of channels from the media server or plurality of media servers. However, more involved arrangements are also possible. Examples of Benefits [0069] With block-request streaming, the media client maintains a coupling between the timing of these block requests and the timing of the media playback for the user. This model can retain the advantages of the "download" model described above, while avoiding some of the disadvantages that arise from the usual decoupling of media playback from media distribution. The block-request streaming model makes use of the rate and congestion control mechanisms available in transport protocols, such as TCP, to ensure that the maximum available bandwidth is used for the media data. Additionally, dividing the media presentation into blocks allows each block of encoded media data to be selected from a set of multiple available encodings. 
[0070] This selection can be based on various criteria, including matching the media data rate to the available bandwidth, even when the available bandwidth is changing over time, matching the media resolution or decoding complexity to the client capabilities or configuration, or matching user preferences such as languages. The selection may also include downloading and presenting auxiliary components such as accessibility components, closed captioning, subtitles, sign language video, etc. Examples of existing systems using the block-request streaming model include Move Networks™, Microsoft Smooth Streaming, and the Apple iPhone™ Streaming Protocol. [0071] Commonly, each block of media data can be stored on a server as an individual file, and then a protocol, such as HTTP, is used, in conjunction with HTTP server software running on the server, to request the file as a unit. Typically, the client receives metadata files, which may, for example, be in Extensible Markup Language (XML) format or in playlist text format or in binary format, which describe the characteristics of the media presentation, such as the available encodings (e.g., required bandwidth, resolutions, encoding parameters, media type, language), typically referred to as "representations" in this document, and the way in which the encodings have been divided into blocks. For example, the metadata might include a Uniform Resource Locator (URL) for each block. The URLs themselves can be prefixed with a scheme, such as the "http://" string, to indicate that the protocol to be used to access the documented resource is HTTP. Another example is "ftp://" to indicate that the protocol to be used is FTP. [0072] In other systems, for example, media blocks can be constructed "on the fly" by the server in response to a client request that indicates the part of the media presentation, in time, that is requested. 
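The metadata file described in paragraph [0071] can be illustrated with a small parser. The XML element and attribute names below are invented for this sketch and do not correspond to any standardized metadata schema; the point is only that a client can map each representation's declared bandwidth to an ordered list of block URLs.

```python
# Parsing a minimal, hypothetical XML metadata file into a map from
# representation bandwidth to the ordered list of block URLs.
# Element and attribute names are invented, not a standardized schema.
import xml.etree.ElementTree as ET

METADATA = """\
<presentation>
  <representation bandwidth="500000" lang="en">
    <block url="http://example.com/low/block_001.3gp"/>
    <block url="http://example.com/low/block_002.3gp"/>
  </representation>
  <representation bandwidth="1500000" lang="en">
    <block url="http://example.com/high/block_001.3gp"/>
    <block url="http://example.com/high/block_002.3gp"/>
  </representation>
</presentation>
"""

def parse_metadata(xml_text):
    """Return {bandwidth: [block URLs in playback order]}."""
    root = ET.fromstring(xml_text)
    representations = {}
    for rep in root.findall("representation"):
        bandwidth = int(rep.get("bandwidth"))
        representations[bandwidth] = [blk.get("url") for blk in rep.findall("block")]
    return representations

if __name__ == "__main__":
    reps = parse_metadata(METADATA)
    print(sorted(reps))        # [500000, 1500000]
    print(reps[1500000][0])    # http://example.com/high/block_001.3gp
```

With such a map in hand, the rate-selection logic chooses a bandwidth key and the client fetches the corresponding block URLs in order.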
For example, in the case of HTTP with the "http://" scheme, the execution of a request on this URL provides a response that contains some specific data in the entity body of that response. The implementation in the network of how to generate this response can be quite different, depending on the implementation of the server serving such requests. [0073] Typically, each block can be independently decodable. For example, in the case of video media, each block might start with a "seek point". In some coding schemes, a seek point is referred to as a "Random Access Point" or "RAP", although not all RAPs may be designated as seek points. Similarly, in other coding schemes, a seek point starts at an "Instantaneous Decoding Refresh" or "IDR" frame in the case of H.264 video encoding, although not all IDRs may be designated as seek points. A seek point is a position in the video (or other) media from which a decoder can start decoding without requiring any data from earlier frames, data, or samples, as would be required where a frame or sample being decoded was encoded not independently but as, for example, the difference between the current frame and the previous frame. [0074] A concern in such systems may be the ability to initiate playback of a stream, for example, decoding and rendering received audio and video streams using a personal computer and displaying the video on a computer screen and playing the audio through built-in speakers, or as another example, decoding and rendering received audio and video streams using a set-top box and displaying the video on a television screen device and playing the audio through a stereo system. A primary concern may be to minimize the delay between when a user decides to watch new content delivered as a stream and takes the action that expresses that decision, for example, the user clicks on a link within a browser window or presses the play button on a remote control device, and when the content starts to be displayed on the user's screen, hereinafter referred to as the "content zapping time". Each of these concerns can be addressed by elements of the improved system described here. [0075] An example of content zapping is when a user is watching a first content delivered via a first stream and then decides to watch a second content delivered via a second stream and initiates an action to start watching the second content. The second stream may be sent from the same set of servers as the first stream or from a different one. Another example of content zapping is when a user is visiting a website and decides to start watching a first content delivered via a first stream by clicking on a link within the browser window. Similarly, a user may decide to start playing content not from the beginning, but from some point within the stream. The user indicates to the client device to seek to a position in time, and the user can expect the media at the selected time to be rendered instantly. Minimizing content zapping time is important for video watching, to allow users a fast, high quality content browsing experience when searching and sampling a wide range of available content. [0076] Recently, it has become common practice to use FEC codes to protect streaming media during transmission. When sent over a packet network, examples of which include the Internet and wireless networks such as those standardized by groups such as 3GPP, 3GPP2, and DVB, the source stream is packetized as it is generated or made available, and the packets can thus be used to carry the source or content stream in the order it is generated or made available to receivers. [0077] In a typical application of FEC codes to these types of scenarios, an encoder can use a FEC code in the creation of repair packets, which are then sent in addition to the original source packets containing the source stream. 
Repair packets have the property that, when source packet loss occurs, received repair packets can be used to recover the data contained in the lost source packets. Repair packets can be used to recover the contents of source packets that are lost entirely, but they can also be used to recover from partial packet loss, whether from fully received repair packets or even from partially received repair packets. Thus, wholly or partially received repair packets can be used to recover wholly or partially lost source packets. [0078] In yet other examples, other types of corruption can occur in the sent data, for example, bit values can be flipped, and thus repair packets can be used to correct such corruption and provide as accurate a recovery of the source packets as possible. In other examples, the source stream is not necessarily sent in discrete packets, but may instead be sent, for example, as a continuous bit stream. [0079] There are many examples of FEC codes that can be used to provide protection of a source stream. Reed-Solomon codes are well-known codes for error and erasure correction in communication systems. For erasure correction over, for example, packet data networks, a well-known efficient implementation of Reed-Solomon codes uses Cauchy or Vandermonde matrices as described in L. Rizzo, "Effective Erasure Codes for Reliable Computer Communication Protocols", Computer Communication Review, 27(2):24-36 (April 1997) (hereinafter "Rizzo") and Bloemer, et al., "An XOR-Based Erasure-Resilient Coding Scheme", Technical Report TR-95-48, International Computer Science Institute, Berkeley, California (1995) (hereinafter "XOR-Reed-Solomon") or elsewhere. [0080] Other examples of FEC codes include LDPC codes, chain reaction codes as described in Luby I, and multi-stage chain reaction codes as in Shokrollahi I. [0081] Examples of the FEC decoding process for variants of Reed-Solomon codes are described in Rizzo and XOR-Reed-Solomon.
In these examples, decoding may be applied after sufficient source and repair data packets have been received. The decoding process can be computationally intensive and, depending on the CPU resources available, can take considerable time to complete relative to the length of time spanned by the media in the block. The receiver can take this decoding time into account when calculating the delay required between the start of reception of the media stream and playout of the media. This delay due to decoding is perceived by the user as a delay between the request for a particular media stream and the start of playback; it is therefore desirable to minimize this delay. [0082] In many applications, packets are further subdivided into symbols on which the FEC process is applied. A packet can contain one or more symbols (or less than one symbol, but usually symbols are not split across groups of packets unless the error conditions between groups of packets are known to be highly correlated). A symbol can have any size, but often the size of a symbol is at most equal to the size of the packet. Source symbols are the symbols that encode the data to be transmitted. Repair symbols are symbols generated, directly or indirectly, from the source symbols, in addition to the source symbols (that is, the data to be transmitted can be recovered entirely if all the source symbols are available and none of the repair symbols are available). [0083] Some FEC codes are block-based, in that the encoding operations depend on the symbols that are in a block and can be independent of symbols not in the block. With block-based encoding, an FEC encoder can generate repair symbols for a block from the source symbols in that block, then move on to the next block without needing to refer to source symbols other than those of the current block being encoded.
In transmission, a source block comprising source symbols can be represented by an encoded block comprising encoded symbols (which may be some source symbols, some repair symbols, or both). With the presence of repair symbols, not every source symbol is required in every encoded block. [0084] For some FEC codes, notably Reed-Solomon codes, the encoding and decoding time can grow impractical as the number of encoded symbols per source block grows. Thus, in practice, there is often a practical upper bound (255 is an approximate practical bound for some applications) on the total number of encoded symbols that can be generated per source block, especially in the typical case where the Reed-Solomon encoding or decoding process is performed by custom hardware; for example, MPE-FEC processes using Reed-Solomon codes, included as part of the DVB-H standard for protecting streams against packet loss, are implemented in specialized hardware within a mobile phone that is limited to 255 total Reed-Solomon encoded symbols per source block. Since symbols often must be placed in separate packet payloads, this places a practical upper bound on the maximum length of the source block being encoded. For example, if a packet payload is limited to 1024 bytes or less and each packet carries one encoded symbol, then an encoded source block can be at most 255 kilobytes, which is of course also an upper bound on the size of the source block itself. [0085] Other concerns, such as being able to decode source blocks fast enough to keep up with the source streaming rate, to minimize the decoding latency introduced by FEC decoding, and to use only a small fraction of the available CPU on the receiving device at any point during FEC decoding, are addressed by the elements described herein, as is the need to provide a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to receivers.
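As a concrete, much-simplified illustration of the block-based FEC encoding described in paragraphs [0082] to [0084], the sketch below generates a single XOR repair symbol per source block and uses it to recover one lost source symbol. Real systems would use Reed-Solomon or chain reaction codes, which can repair many losses per block; this code is illustrative only and is not taken from the patent.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_block(source_symbols):
    """Generate one XOR repair symbol over all source symbols in a block."""
    return reduce(xor_bytes, source_symbols)

def recover(received, repair):
    """Recover a single lost source symbol (marked None) using the repair symbol.

    XORing the repair symbol with every received source symbol cancels them
    out, leaving exactly the missing symbol.
    """
    lost = [i for i, s in enumerate(received) if s is None]
    if not lost:
        return list(received)
    assert len(lost) == 1, "single-parity XOR can repair at most one loss per block"
    missing = reduce(xor_bytes, (s for s in received if s is not None), repair)
    out = list(received)
    out[lost[0]] = missing
    return out
```

For example, losing the second of three source symbols and then calling `recover` with the repair symbol restores the original block.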
[0086] A block-request streaming system needs to support changes to the structure or metadata of the presentation, for example, changes in the number of available media encodings or changes in media encoding parameters such as bit rate, resolution, aspect ratio, audio or video codecs or codec parameters, and changes in other metadata such as URLs associated with the content files. Such changes may be required for a variety of reasons, including splicing together content from different sources, such as advertisements or different segments of a larger presentation, or modifying URLs or other parameters as a result of changes in the serving infrastructure, for example, due to configuration changes, equipment failures or recovery from equipment failures, or other reasons. [0087] There are methods in which a presentation can be controlled by a continuously updated playlist file. Since this file is continuously updated, at least some of the changes described above can be made within these updates. A disadvantage of such a conventional method is that client devices must continually retrieve, or "poll", the playlist file, placing load on the serving infrastructure, and that this file cannot be cached for longer than the update interval, making the task of the serving infrastructure much more difficult. This is addressed by elements presented here, so that updates of the kind described above can be provided without the need for continuous polling of the metadata file by clients. [0088] Another problem, especially in live services, typically known from broadcast distribution, is the user's inability to view content that was broadcast before the time the user joined the program. Typically, local personal recording either consumes unnecessary local storage, is not possible because the client was not tuned to the program, or is prohibited by content protection rules.
Network-based recording and time-shift viewing are preferred, but require individual user connections to the server and a delivery protocol and infrastructure separate from those of the live service, resulting in duplicated infrastructure and significant server costs. This too is solved by the elements described herein. System Overview [0089] One embodiment of the invention is described with reference to Figure 1, which illustrates a simplified diagram of a block-request streaming system embodying the invention. [0090] In Figure 1, a block streaming system 100 is illustrated, comprising the block serving infrastructure ("BSI") 101, which in turn comprises an ingestion system 103 for ingesting content 102, preparing that content and packaging it for service by an HTTP streaming server 104 by storing it in a content store 110 that is accessible to both the ingestion system 103 and the HTTP streaming server 104. As illustrated, the system 100 may also include an HTTP cache 106. In operation, a client 108, such as an HTTP streaming client, sends requests 112 to the HTTP streaming server 104 and receives responses 114 from the HTTP streaming server 104 or the HTTP cache 106. In each case, the elements illustrated in Figure 1 can be implemented, at least in part, in software, comprising program code that runs on a processor or other electronics. [0091] The content may comprise movies, audio, flat 2D video, 3D video, other types of video, images, timed text, timed metadata, or the like. Some content may involve data that must be presented or consumed on a timed basis, such as data for presenting auxiliary information (station identification, advertising, stock quotes, Flash sequences, etc.) along with other media being played out. Hybrid presentations that combine other media and/or go beyond mere audio and video can be used as well.
[0092] As illustrated in Figure 2, media blocks can be stored within a block serving infrastructure 101(1), which can be, for example, an HTTP server, a Content Delivery Network device, an HTTP proxy, an FTP proxy or server, or some other media server or system. The block serving infrastructure 101(1) is connected to a network 122, which can be, for example, an Internet Protocol ("IP") network such as the Internet. A block-request streaming system client is shown having six functional components, namely a block selector 123, provided with the metadata described above and performing the function of selecting blocks or partial blocks to request from among the plurality of available blocks indicated by the metadata; a block requester 124, which receives request instructions from the block selector 123 and performs the operations necessary to send a request for the specified block, portions of a block, or multiple blocks to the block serving infrastructure 101(1) over the network 122 and to receive in return the data comprising the block; as well as a block buffer 125, a buffer monitor 126, a media decoder 127 and one or more media transducers 128 that facilitate media consumption. [0093] The block data received by the block requester 124 is passed for temporary storage to the block buffer 125, which stores the media data. Alternatively, the received block data can be stored directly in the block buffer 125 as illustrated in Figure 1. The media decoder 127 is provided with media data by the block buffer 125 and performs such transformations on that data as are necessary to provide suitable input to the media transducers 128, which render the media in a form suitable for consumption by a user or otherwise. Examples of media transducers include visual display devices such as those found in mobile phones, computer systems or televisions, and can also include audio rendering devices such as speakers or headphones.
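The interaction among the client components just described (block selector 123, block requester 124, block buffer 125 and buffer monitor 126) can be sketched roughly as below. The class, method names and the low-watermark policy are illustrative assumptions; the patent does not prescribe this API, and `fetch_fn` merely stands in for the network request performed by the block requester.

```python
import collections

class BlockClient:
    """Rough sketch of the client-side pipeline of Figure 2."""

    def __init__(self, block_ids, fetch_fn, low_watermark=3):
        self.block_ids = block_ids        # available blocks, from presentation metadata
        self.fetch = fetch_fn             # stands in for block requester 124
        self.buffer = collections.deque() # block buffer 125
        self.low_watermark = low_watermark
        self.next_index = 0

    def buffer_monitor_wants_more(self):
        # buffer monitor 126: signal when the buffer runs low
        return len(self.buffer) < self.low_watermark

    def select_next_block(self):
        # block selector 123: here simply sequential selection
        if self.next_index < len(self.block_ids):
            block_id = self.block_ids[self.next_index]
            self.next_index += 1
            return block_id
        return None

    def step(self):
        """One iteration: request another block if the monitor says the buffer is low."""
        if self.buffer_monitor_wants_more():
            block_id = self.select_next_block()
            if block_id is not None:
                self.buffer.append(self.fetch(block_id))

    def consume(self):
        # media decoder 127 pulls the next block from the buffer
        return self.buffer.popleft() if self.buffer else None
```

In a real client the selector would also weigh network conditions and representation bit rates, as discussed later in the document.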
[0094] An example of a media decoder would be a function that transforms data in the format described in the H.264 video coding standard into analog or digital representations of the video frames, such as a YUV-format pixel map together with presentation timestamps for each frame or sample. [0095] The buffer monitor 126 receives information concerning the contents of the block buffer 125 and, based on this information and possibly other information, provides input to the block selector 123, which is used to determine the selection of blocks to request, as described herein. [0096] In the terminology used here, each block has a "playout time" or "duration" that represents the amount of time the receiver would take to play the media included in that block at normal speed. In some cases, the playout of media within a block may depend on data already received from previous blocks. In rare cases, the playout of part of the media in a block may depend on a subsequent block, in which case the playout time for the block is defined with respect to the media that can be played out within the block without reference to the subsequent block, and the playout time for the subsequent block is increased by the playout time of the media within this block that can only be played out after the subsequent block has been received. Since including media in a block that depends on subsequent blocks is a rare case, the remainder of this description assumes that media in one block does not depend on subsequent blocks, but note that those skilled in the art will recognize that this variation can easily be added to the embodiments described below. [0097] The receiver may have controls such as "pause", "fast forward", "rewind", and so on,
which may result in blocks being consumed by playout at a different rate; however, if the receiver can obtain and decode each consecutive sequence of blocks within an aggregate time equal to or less than their aggregate playout time, excluding the last block in the sequence, then the receiver can present the media to the user without stalling. In some descriptions herein, a particular position in the media stream is referred to as a particular "time" in the media, corresponding to the time that would have elapsed between the start of media playout and the time at which the particular position in the video stream is reached. Time or position in a media stream is a conventional concept. For example, where a video stream comprises 24 frames per second, the first frame might be regarded as having position or time t=0.0 seconds and frame 241 as having position or time t=10.0 seconds. Note that, in a frame-based video stream, position or time need not be continuous, since each of the bits in the stream from the first bit of frame 1 to just before the first bit of frame 2 could all have the same time value. [0098] Adopting the above terminology, a block-request streaming system (BRSS) comprises one or more clients that make requests to one or more content servers (for example, HTTP servers, FTP servers, etc.). An ingestion system comprises one or more ingestion processors, where an ingestion processor receives content (in real time or otherwise), processes the content for use by the BRSS and stores it in storage accessible to the content servers, possibly also together with metadata generated by the ingestion processor. [0099] The BRSS can also contain content caches that coordinate with the content servers.
The content servers and content caches can be conventional HTTP servers and HTTP caches that receive requests for files or segments in the form of HTTP requests that include a URL, and that may also include a byte range in order to request less than the entire file or segment indicated by the URL. The clients can include a conventional HTTP client that makes requests of HTTP servers and handles the responses to those requests, where the HTTP client is driven by a novel client system that formulates the requests, passes them to the HTTP client, obtains responses from the HTTP client and processes them (or stores, transforms, etc.) in order to provide them to a presentation device for playout by a client device. Typically, the client system does not know in advance which media it will need (since the need can depend on user input, changes in user input, etc.), so it is said to be a "streaming" system in that the media is "consumed" as soon as it is received, or shortly thereafter. As a result, response delays and bandwidth constraints can cause delays in a presentation, for example causing a presentation to pause as the stream catches up to the point at which the user is consuming the presentation. [00100] In order to provide a presentation that is perceived to be of good quality, a number of details can be implemented in the BRSS, either at the client end, at the ingestion end, or both. In some cases, the details that are implemented take into consideration, and deal with, the client-server interface over the network. In some embodiments, both the client system and the ingestion system are aware of the enhancements, whereas in other embodiments, only one side is aware of them. In some such cases, the whole system benefits from the enhancements even though one side is not aware of them, while in others the benefit only accrues if both sides are aware of them, but when one side is not aware, the system still operates without failing.
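The partial (byte-range) HTTP requests just mentioned use the standard HTTP `Range` header. A minimal helper for formulating such a request might look like the following; the helper name and the choice of returning a header dictionary are illustrative assumptions, not part of the patent.

```python
def range_header(first_byte, last_byte=None):
    """Build an HTTP Range header for a partial GET of a file or segment.

    HTTP byte ranges are inclusive on both ends; omitting last_byte requests
    every byte from first_byte to the end of the resource.
    """
    if last_byte is None:
        return {"Range": "bytes=%d-" % first_byte}
    return {"Range": "bytes=%d-%d" % (first_byte, last_byte)}
```

A client would attach this header to an ordinary GET (for example via `urllib.request.Request(url, headers=range_header(0, 1023))`) to fetch only the indicated bytes of a segment; a server that honors the range replies with status 206 (Partial Content).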
[00101] As illustrated in Figure 3, the ingestion system can be implemented as a combination of hardware and software components, according to various embodiments. The ingestion system can comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed here. The system can be realized as a specific machine in the form of a computer. The system can be a server computer, a personal computer (PC), or any system capable of executing a set of instructions (sequential or otherwise) that specify the actions to be taken by that system. Additionally, while only a single system is illustrated, the term "system" should also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to carry out any one or more of the methodologies discussed here. [00102] The ingestion system can include the ingestion processor 302 (e.g., a CPU), a memory 304 that can store program code during execution, and disk storage 306, all of which communicate with each other over a bus 300. The system can additionally include a video display unit 308 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The system can also include an alphanumeric input device 310 (e.g., a keyboard) and a network interface device 312 for receiving the source content and for delivering the stored content. [00103] The disk storage unit 306 can include a machine-readable medium on which one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described here can be stored. The instructions can also reside, completely or at least partially, within the memory 304 and/or within the ingestion processor 302 during their execution by the system, with the memory 304 and the ingestion processor 302 also constituting machine-readable media.
[00104] As illustrated in Figure 4, the client system can be implemented as a combination of hardware and software components, according to various embodiments. The client system can comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed here. The system can be realized as a specific machine in the form of a computer. The system can be a server computer, a PC, or any system capable of executing a set of instructions (sequential or otherwise) that specify the actions to be taken by the system. Additionally, while only a single system is illustrated, the term "system" should also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to carry out any one or more of the methodologies discussed here. [00105] The client system can include the client processor 402 (e.g., a CPU), a memory 404 that can store program code during execution, and disk storage 406, all of which communicate with each other over a bus 400. The system can additionally include a video display unit 408 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The system can also include an alphanumeric input device 410 (e.g., a keyboard) and a network interface device 412 for sending requests and receiving responses. [00106] The disk storage unit 406 can include a machine-readable medium on which one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described here can be stored. The instructions can also reside, completely or at least partially, within the memory 404 and/or within the client processor 402 during their execution by the system, with the memory 404 and the client processor 402 also constituting machine-readable media.
Using the 3GPP File Format [00107] The 3GPP file format, or any other file format based on the ISO base media file format, such as the MP4 file format or the 3GPP2 file format, can be used as the container format for HTTP streaming, with the following features. A segment index can be included in each segment to signal time offsets and byte ranges, so that the client can download the appropriate pieces of files or media segments as needed. The global presentation timing of the entire media presentation and the local timing within each 3GP file or media segment can be accurately aligned. Tracks within a 3GP file or media segment can be accurately aligned. Tracks across representations can also be aligned by assigning each of them to the global timeline, so that switching across representations can be seamless and the joint presentation of media components in different representations can be synchronized. [00108] The file format can contain a profile for adaptive streaming with the following properties. All movie data can be contained in movie fragments - the "moov" box may not contain any sample information. Audio and video sample data can be interleaved, with requirements similar to those of the progressive download profile as specified in TS26.244. The "moov" box can be placed at the beginning of the file, followed by fragment offset data, also referred to as a segment index, containing time offset information and byte ranges for each fragment, or for at least a subset of the fragments in the containing segment. [00109] It may also be possible for the Media Presentation Description to reference files that follow the existing progressive download profile. In that case, the client can use the Media Presentation Description simply to select the appropriate alternative version from among multiple available versions.
Clients can also use HTTP partial GET requests with files that conform to the progressive download profile to request subsets of each alternative version and thereby implement a less efficient form of adaptive streaming. In this case, the different representations containing the media in the progressive download profile can still adhere to a common global timeline to allow seamless switching across representations. Advanced Methods Overview [00110] In the sections that follow, methods for improved block-request streaming systems are described. It should be understood that some of these improvements can be used with or without others of them, depending on the needs of the application. In general operation, a receiver makes requests to a server or other transmitter for specific blocks or portions of blocks of data. Files, also called segments, can contain multiple blocks and are associated with one representation of a media presentation. [00111] Preferably, indexing information, also called "segment indexing" or a "segment map", is generated that provides a mapping from playout or decoding times to the byte offsets of the corresponding blocks or fragments within a segment. This segment indexing can be included within the segment, typically at the beginning of the segment (at least part of the segment map is at the beginning), and is often small. The segment index can also be provided in a separate index segment or file. Especially in cases where the segment index is contained in the segment, the receiver can quickly download part or all of this segment map and subsequently use it to determine the mapping between time offsets and the corresponding byte positions of the fragments associated with those time offsets within the file. [00112] A receiver can use the byte offset to request data from the fragments associated with particular time offsets, without having to download all the data associated with other fragments not associated with the time offsets of interest.
In this way, the segment map or segment indexing can greatly improve a receiver's ability to directly access the portions of the segment that are relevant to the current time offsets of interest, with benefits including improved content zapping times, the ability to switch quickly from one representation to another as network conditions vary, and reduced waste of network resources by avoiding the download of media that is not played out by a receiver. [00113] Where switching from one representation (referred to herein as the "switch-from" representation) to another representation (referred to herein as the "switch-to" representation) is considered, the segment index can also be used to identify the start time of a random access point in the switch-to representation and to identify the amount of data to be requested in the switch-from representation, so as to ensure seamless switching in the sense that media in the switch-from representation is downloaded up to a presentation time from which playout of the switch-to representation can start seamlessly at the random access point. [00114] These blocks represent segments of the video media and other media that the requesting receiver needs in order to generate output for the receiver's user. The receiver of the media can be a client device, such as when the receiver receives content from a server that streams the content. Examples include set-top boxes, computers, game consoles, specially equipped televisions, handheld devices, specially equipped mobile phones or other client receivers. [00115] Many advanced buffer management methods are described here. For example, a buffer management method allows the client to request the highest-quality media blocks that can be received in time to be played out continuously. A variable block size feature improves compression efficiency. The ability to have multiple connections transmitting blocks to the requesting device while limiting the frequency of requests provides improved transmission performance.
Partially received blocks of data can be used to continue the media presentation. A connection can be reused for multiple blocks without having to commit the connection at the outset to a particular set of blocks. Consistency in the selection of servers from among multiple possible servers by multiple clients is improved, which reduces the frequency of duplicated content on nearby servers and improves the likelihood that a given server will hold an entire file. Clients can request media blocks based on metadata (such as the available media encodings) that is embedded in the URLs of the files containing the media blocks. A system can provide for the calculation and minimization of the amount of buffering time required before content playout can begin without incurring subsequent pauses in media playout. Available bandwidth can be shared among multiple media blocks and adjusted as each block's playout time approaches, so that, if necessary, a larger share of the available bandwidth can be allocated toward the block with the nearest playout time. [00116] HTTP streaming can employ metadata. Presentation-level metadata includes, for example, stream duration, available encodings (bit rates, codecs, spatial resolutions, frame rates, language, media types), pointers to stream metadata for each encoding, and content protection (digital rights management (DRM) information). Stream metadata can be the URLs of the segment files. [00117] Segment metadata can include byte-range versus time information for requests within a segment and identification of RAPs or other seek points, where some or all of this information can be part of a segment index or segment map. [00118] Streams can comprise multiple encodings of the same content. Each encoding can then be divided into segments, where each segment corresponds to a storage unit or file.
In the case of HTTP, a segment is typically a resource that can be referenced by a URL, and the request for such a URL results in the segment being returned as the entity body of the request-response message. Segments can comprise multiple groups of pictures (GoPs). Each GoP can further comprise multiple fragments, where the segment indexing provides byte/time offset information for each fragment, i.e., the unit of indexing is a fragment. [00119] Fragments or portions of fragments can be requested over parallel TCP connections to increase throughput. This can mitigate problems that arise when sharing connections on a bottleneck link or when connections are lost due to congestion, thereby increasing the overall speed and reliability of delivery, which can substantially improve the speed and reliability of the content zapping time. Bandwidth can be traded off against per-request latency, but care should be taken to avoid making requests too far into the future, which could increase the risk of starvation. [00120] Multiple requests to the same server can be pipelined (making the next request before the current request completes) to avoid repetitive TCP startup delays. Requests for contiguous fragments can be aggregated into one request. [00121] Some CDNs prefer large files and may trigger a background fetch of an entire file from an origin server when first seeing a range request. Most CDNs will, however, serve range requests from the cache if the data is available. It can therefore be advantageous for some portion of the client requests to be for whole segment files. These requests can later be canceled if necessary. [00122] Valid switch points can be seek points, specifically RAPs, for example, in the target stream. Different implementations are possible, such as fixed GoP structures or alignment of RAPs across streams (based on the beginning of the media or based on GoPs). [00123] In one embodiment, segments and GoPs can be aligned across streams of different bit rates.
In this embodiment, GoPs can be variable in size and can contain several fragments, but the fragments are not aligned between the streams of different rates. [0124] In some embodiments, file redundancy can be used to advantage. In these embodiments, an erasure code is applied to each fragment to generate redundant versions of the data. Preferably, the source formatting is not changed by the use of FEC, and additional repair segments containing the FEC repair data, for example as a dependent representation of the original representation, are generated and made available as an additional step in the ingestion system. The client, which is able to reconstruct a fragment using only the source data for that fragment, may request from the servers only the source data for the fragment within the segment. If the servers are unavailable or the connection to the servers is slow, which can be determined before or after the request for the source data, additional repair data can be requested for the fragment from the repair segment, which decreases the time needed to reliably deliver enough data to recover the fragment, possibly using FEC decoding to recover the fragment's source data from a combination of received source and repair data. Additionally, further repair data can be requested to allow recovery of a fragment if the fragment becomes urgent, i.e., its playout time becomes imminent, which increases that fragment's share of the data on the link, but is more efficient than closing other connections on the link to free up bandwidth. This can also mitigate the risk of starvation from the use of parallel connections. [0125] The fragment format can be a stored sequence of RTP packets with audio and video synchronization achieved through RTCP. [0126] The segment format can also be a stored sequence of MPEG-2 TS packets with audio and video synchronization achieved by the MPEG-2 TS internal timing.
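The repair-data fallback in paragraph [0124] amounts to deciding how many extra encoded symbols to request for an urgent fragment. The sketch below assumes an idealized (MDS-like) erasure code, where a fragment of k source symbols is recoverable from any k received encoded symbols; the function name and the optional `overhead` parameter, which models codes such as chain reaction codes that may need slightly more than k symbols, are illustrative assumptions.

```python
def repair_symbols_to_request(k, source_received, repair_received, overhead=0):
    """Number of additional repair symbols to request for a fragment.

    k               -- source symbols in the fragment
    source_received -- source symbols received so far
    repair_received -- repair symbols received so far
    overhead        -- extra symbols for codes that need slightly more than k
    """
    shortfall = k + overhead - (source_received + repair_received)
    return max(0, shortfall)
```

For example, with k=10 and seven source plus one repair symbol already received, two more repair symbols would be requested from the repair segment.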
Using Signaling and/or Block Creation to Make Streaming More Efficient [00127] A number of features may or may not be used in a block-request streaming system to provide improved performance. Performance can relate to the ability to play out a presentation without interruption, to obtain media data within bandwidth constraints, and/or to do all of this within limited processor resources on a client, server and/or ingestion system. Some of these features will now be described. Indexing within Segments [00128] In order to formulate partial GET requests for Movie Fragments, the client can be informed of the byte offset and the start time, in decoding or presentation time, of all media components contained in the fragments within the file or segment, and also of which fragments start with or contain a Random Access Point (and are thus suitable for use as switching points between alternative representations); this information is often referred to as the segment indexing or segment map. The start time, in decoding or presentation time, can be expressed as deltas with respect to a reference time. [00129] This byte offset and time indexing information can represent at least 8 bytes of data per movie fragment. As an example, for a two-hour movie contained within a single file, with 500 ms movie fragments, this would total about 112 kilobytes of data. Downloading all of this data when starting a presentation can result in a significant additional startup delay. However, the byte and time offset data can be encoded hierarchically, so that the client can quickly find a small chunk of time and offset data relevant to the point in the presentation at which it wants to start. The information can also be distributed within a segment so that some refinement of the segment index can be found interleaved with the media data.
[00130] Note that if a representation is segmented in time into multiple segments, the use of this hierarchical encoding may not be necessary, since the complete time and offset data for each segment may already be quite small. For example, if the segments are one minute long instead of the two hours in the example above, the byte and time offset indexing information is about 1 kilobyte of data, which can typically fit into a single TCP/IP packet. [00131] Different options are possible for adding fragment time and byte offset data to a 3GPP file: [00132] First, the Movie Fragment Random Access Box ("MFRA") can be used for this purpose. The MFRA provides a table, which can assist readers in finding random access points in a file using movie fragments. To support this function, the MFRA contains the byte offsets of the MOOF boxes containing the random access points. The MFRA can be located at or near the end of the file, but this is not necessarily the case. By scanning from the end of the file for a Movie Fragment Random Access Offset Box and using the size information found therein, one can locate the beginning of the Movie Fragment Random Access Box. However, placing the MFRA at the end for HTTP streaming typically requires at least 3 to 4 HTTP requests to access the desired data: at least one to request the MFRA offset from the end of the file, one to obtain the MFRA, and finally one to obtain the desired fragment in the file. Therefore, placement at the beginning may be desirable, since the MFRA can then be downloaded together with the first media data in a single request. Also, using the MFRA for HTTP streaming can be inefficient, since none of the information in the "MFRA" is needed other than the time and moof_offset, and specifying offsets instead of lengths can require more bits. [00133] Second, the Item Location Box ("ILOC") can be used. The ILOC provides a directory of metadata resources in this or other files, giving their containing file, their offset within that file, and their length.
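The index-size figures in [00129] and [00130] follow from simple arithmetic, sketched below; the 8-bytes-per-fragment entry size is the one assumed in the text, and the function name is ours.

```python
BYTES_PER_INDEX_ENTRY = 8   # per [00129]: at least 8 bytes per movie fragment
FRAGMENT_DURATION_S = 0.5   # 500 ms movie fragments

def index_size_bytes(presentation_duration_s: float) -> int:
    """Total indexing data for a presentation of the given duration."""
    fragments = int(presentation_duration_s / FRAGMENT_DURATION_S)
    return fragments * BYTES_PER_INDEX_ENTRY

two_hours = index_size_bytes(2 * 60 * 60)
one_minute = index_size_bytes(60)
print(two_hours)   # 115200 bytes, i.e. about 112 kilobytes for the whole movie
print(one_minute)  # 960 bytes, i.e. about 1 kilobyte per one-minute segment
```

This is why per-segment indexing of short segments can fit into a single TCP/IP packet, while a whole-movie index benefits from hierarchical encoding.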
For example, a system can integrate all externally referenced metadata resources into one file, readjusting file offsets and file references accordingly. However, the ILOC is intended for providing the location of metadata, so it can be difficult for it to coexist with the actual metadata. [00134] Last, and perhaps most suitable, is the specification of a new box, referred to as the Time Index Box ("TIDX"), dedicated specifically to the purpose of providing exact fragment times or durations and byte offsets in an efficient manner. This is described in more detail in the next section. An alternative box with the same functionality could be the Segment Index Box ("SIDX"). Here, unless otherwise noted, the two are interchangeable, as both boxes provide the ability to efficiently provide exact fragment times or durations and byte offsets. The difference between TIDX and SIDX is given below. It should be apparent how to interchange TIDX boxes and SIDX boxes, as both boxes implement a segment index. Segment Indexing [00135] A segment has an identified start time and an identified number of bytes. Multiple fragments can be concatenated into a single segment, and clients can issue requests that identify the specific byte range within the segment corresponding to the required fragment or subset of a fragment. For example, when HTTP is used as the request protocol, the HTTP Range header can be used for this purpose. This approach requires the client to have access to a "segment index" of the segment, which specifies the position of the different fragments within the segment. This "segment index" can be provided as part of the metadata. This approach has the result that far fewer files need to be created and managed compared to the approach in which each block is kept in a separate file.
The management of the creation, transfer and storage of very large numbers of files (which can extend to many thousands for a 1-hour presentation) can be complex and error-prone, so a reduction in the number of files represents an advantage. [00136] If the client only knows the desired start time of a smaller part of a segment, it could request the entire file and then read through the file to determine the appropriate playback start location. To improve bandwidth utilization, segments can include an index file as metadata, where the index file maps the byte ranges of individual blocks to the time ranges to which the blocks correspond, called the segment indexing or segment map. This metadata can be formatted as XML data, or it can be binary, for example following the atom and box structure of the 3GPP file format. The indexing can be simple, where the time and byte ranges of each block are absolute with respect to the beginning of the file, or it can be hierarchical, where some blocks are grouped into parent blocks (and these into grandparent blocks, and so on) and the time and byte range of a given block are expressed with respect to the time and/or byte range of the block's parent block. Illustrative Indexing Map Structure [00137] In one embodiment, the original source data for one representation of a media stream may be contained in one or more media files referred to herein as a "media segment", where each media segment contains the media data used to play out a continuous time segment of the media, e.g., 5 minutes of media playback. [00138] Figure 6 illustrates a general illustrative structure of a media segment. Within each segment, at the beginning or spread throughout the source segment, there is also indexing information, which comprises a time/byte-offset segment map. The time/byte-offset segment map in one embodiment can be a list of time and byte offset pairs (T(0), B(0)), (T(1), B(1)), ..., (T(n), B(n)), where T(i-1) represents a start time within the segment for playback of the i-th media fragment, with respect to the start time of the media among all media segments, T(i) represents an end time for fragment i (and thus the start time of the next fragment), and the byte offset B(i-1) is the corresponding byte index of the beginning of the data within this source segment at which fragment i of the media starts, with respect to the beginning of the source segment, and B(i) is the corresponding end byte index of fragment i (and thus the index of the first byte of the next fragment). If the segment contains multiple media components, then T(i) and B(i) can be provided for each component in the segment in an absolute manner, or can be expressed with respect to another media component that serves as a reference media component. [00139] In this embodiment, the number of fragments in the source segment is n, where n can vary from segment to segment. [00140] In another embodiment, the time offset in the segment index for each fragment can be determined from the absolute start time of the first fragment and the durations of each fragment. In this case, the segment index can document the start time of the first fragment and the durations of all the fragments included in the segment. The segment index can also document only a subset of the fragments. In that case, the segment index documents the duration of a subsegment, which is defined as one or more consecutive fragments ending at the end of the containing segment or at the beginning of the next subsegment. [00141] For each fragment, there can also be a value indicating whether or not the fragment starts at, or contains, a seek point, that is, a point at which no media after that point depends on any media before that point, such that the media from that fragment onward can be played out independently of the previous fragments.
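A minimal sketch of how a client might use the (T(i), B(i)) list of [00138] to turn a desired start time into an HTTP byte range; the entry values below are hypothetical, and the helper name is ours, not the patent's.

```python
from bisect import bisect_right

def fragment_byte_range(seg_map, t):
    """seg_map: list of (T, B) pairs (T(0), B(0)) .. (T(n), B(n)) as in
    the text, with fragment i covering times [T(i-1), T(i)) and bytes
    [B(i-1), B(i) - 1]. Returns the byte range holding time t."""
    times = [T for T, _ in seg_map]
    i = bisect_right(times, t) - 1      # index of T(i-1) for fragment i
    if i < 0 or i >= len(seg_map) - 1:
        raise ValueError("time not covered by this segment")
    return seg_map[i][1], seg_map[i + 1][1] - 1

# Hypothetical segment map with three fragments:
seg_map = [(0.0, 0), (0.485, 50245), (1.000, 120000), (1.500, 160000)]
start, end = fragment_byte_range(seg_map, 0.6)
print(start, end)  # 50245 119999 -> e.g. HTTP "Range: bytes=50245-119999"
```

The hierarchical variant would apply the same lookup first to parent blocks and then, within the chosen parent, to its children.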
Seek points are generally points in the media at which playback can start independently of all previous media. Figure 6 also illustrates a simple example of a possible segment indexing for a source segment. In this example, the time offset value is in units of milliseconds, so the first fragment of this source segment starts 20 seconds from the beginning of the media, and the first fragment has a playout time of 485 milliseconds. The byte offset of the start of the first fragment is 0, and the byte offset of the end of the first fragment/start of the second fragment is 50,245, so the first fragment is 50,245 bytes in size. If a fragment or subsegment does not start with a random access point, but a random access point is contained within the fragment or subsegment, then the difference in decoding time or presentation time between the start time and the actual RAP time can be provided. This makes it possible, in the case of switching to this media segment, for the client to know precisely the time until which the representation being switched from needs to be presented. [00142] In addition to, or instead of, simple or hierarchical indexing, daisy-chained indexing and/or hybrid indexing can be used. [00143] Since the sample durations for the different tracks may not be the same (for example, video samples may be displayed for 33 ms, whereas an audio sample may last 180 ms), the different tracks in a Movie Fragment may not start and end at precisely the same time, i.e., the audio may start slightly before or slightly after the video, with the opposite being true for the preceding fragment, to compensate. To avoid ambiguity, the timestamps specified in the time and byte offset data can be specified relative to a particular track, and this can be the same track for each representation. Usually this will be the video track. This allows the client to identify exactly the next video frame when it is switching representations.
[00144] Care must be taken during presentation to maintain a strict relationship between the track timescales and the presentation time, to ensure smooth playback and maintenance of audio/video synchronization despite the above issue. [00145] Figure 7 illustrates some examples, such as a simple index 700 and a hierarchical index 702. [00146] Two specific examples of a box containing a segment map are provided below, one referred to as TIDX and one referred to as SIDX. The definitions follow the box structure according to the ISO base media file format. Other designs for such boxes, defining similar syntax and with the same semantics and functionality, should be apparent to the reader. Time Index Box Definition Box Type: "tidx" Container: File Mandatory: No Quantity: Zero or one [00147] The Time Index Box can provide a set of time and byte offset indices that associate certain regions of the file with certain time intervals of the presentation. The Time Index Box can include a targettype field, which indicates the type of the referenced data. For example, a Time Index Box with targettype "moof" provides an index to the media fragments contained in the file, in terms of both time and byte offsets. A Time Index Box whose targettype is Time Index Box can be used to build a hierarchical time index, allowing users of the file to quickly navigate to the required part of the index. [00148] The segment index may, for example, contain the following syntax:
aligned(8) class TimeIndexBox extends FullBox('frai') {
    unsigned int(32) targettype;
    unsigned int(32) time_reference_track_ID;
    unsigned int(32) number_of_elements;
    unsigned int(64) first_element_offset;
    unsigned int(64) first_element_time;
    for (i = 1; i <= number_of_elements; i++) {
        bit(1) random_access_flag;
        unsigned int(31) length;
        unsigned int(32) deltaT;
    }
}
Semantics
targettype: the type of the box data referenced by this Time Index Box. It can be Movie Fragment Header ("moof") or Time Index Box ("tidx").
time_reference_track_ID: indicates the track with respect to which the time offsets in this index are specified.
number_of_elements: the number of elements indexed by this Time Index Box.
first_element_offset: the byte offset, from the start of the file, of the first indexed element.
first_element_time: the start time of the first indexed element, using the timescale specified in the Media Header box of the track identified by time_reference_track_ID.
random_access_flag: one if the start time of the element is a random access point; zero otherwise.
length: the length of the indexed element, in bytes.
deltaT: the difference, in terms of the timescale specified in the Media Header box of the track identified by time_reference_track_ID, between the start time of this element and the start time of the next element.
Segment Index Box [00149] The SIDX provides a compact index of the movie fragments and other Segment Index Boxes in a segment. There are two loop structures in the Segment Index Box. The first loop documents the first sample of the subsegment, that is, the sample in the first movie fragment referenced by the second loop. The second loop provides the index of the subsegment. The container for the "sidx" box is the file or segment directly. Syntax Semantics reference_track_ID: provides the track_ID of the reference track.
track_count: the number of tracks indexed in the following loop (1 or more);
reference_count: the number of elements indexed by the second loop (1 or more);
track_ID: the ID of a track for which a track fragment is included in the first movie fragment identified by this index; exactly one track_ID in this loop is equal to reference_track_ID;
decoding_time: the decoding time of the first sample in the track identified by track_ID in the movie fragment referenced by the first item in the second loop, expressed in the track's timescale (as documented in the timescale field of the track's Media Header Box);
reference_type: when set to 0, indicates that the reference is to a movie fragment box ("moof"); when set to 1, indicates that the reference is to a segment index box ("sidx");
reference_offset: the distance, in bytes, from the first byte following the containing Segment Index Box to the first byte of the referenced box;
subsegment_duration: when the reference is to a Segment Index Box, this field carries the sum of the subsegment_duration fields in the second loop of that box; when the reference is to a movie fragment, this field carries the sum of the sample durations of the samples in the reference track, in the indicated movie fragment and in subsequent movie fragments up to either the first movie fragment documented by the next entry in the loop or the end of the subsegment, whichever is earlier; the duration is expressed in the track's timescale (as documented in the timescale field of the track's Media Header Box);
contains_RAP: when the reference is to a movie fragment, this bit can be 1 if the track fragment within that movie fragment, for the track with track_ID equal to reference_track_ID, contains at least one random access point, and otherwise this bit is set to 0; when the reference is to a segment index, this bit is set to 1 only if any of the references in that segment index have this bit set to 1, and 0 otherwise;
RAP_delta_time: if contains_RAP is equal to 1, it
gives the presentation (composition) time of a RAP; it is reserved with the value 0 if contains_RAP is equal to 0. The time is expressed as the difference between the decoding time of the first sample of the subsegment documented by this entry and the presentation (composition) time of the random access point, in the track with track_ID equal to reference_track_ID. Differences between TIDX and SIDX [00150] TIDX and SIDX provide the same functionality with respect to indexing. The first loop of the SIDX additionally provides global timing for the first movie fragment, but the global timing can also be contained in the movie fragment itself, either absolutely or relative to the reference track. [00151] The second loop of the SIDX implements the functionality of the TIDX. Specifically, the SIDX permits a mixture of targets for the reference of each index, indicated by reference_type, whereas the TIDX references only either TIDX or only MOOF. number_of_elements in TIDX corresponds to reference_count in SIDX; time_reference_track_ID in TIDX corresponds to reference_track_ID in SIDX; first_element_offset in TIDX corresponds to reference_offset in the first entry of the second loop; first_element_time in TIDX corresponds to the decoding_time of the reference track in the first loop; random_access_flag in TIDX corresponds to contains_RAP in SIDX, with the additional freedom that in SIDX the RAP need not necessarily be located at the beginning of the fragment, which therefore requires RAP_delta_time; length in TIDX corresponds to reference_offset in SIDX; and finally deltaT in TIDX corresponds to subsegment_duration in SIDX. Therefore, the functionalities of the two boxes are equivalent. Variable Block Sizing and Sub-GoP Blocks [00152] For video media, the relationship between the video encoding structure and the block structure for requests can be important.
For example, if each block starts with a seek point, such as a RAP, and each block encompasses an equal period of video time, then the positions of at least some seek points in the video media are fixed, and seek points will occur at regular intervals within the video encoding. As is well known to those skilled in the art of video encoding, compression efficiency can be improved if the seek points are placed according to the relationships between video frames, and in particular if they are placed at frames that have little in common with the preceding frames. This requirement that blocks encompass equal amounts of time thus places a restriction on the video encoding, such that the compression can be less than optimal. [00153] It is desirable to allow the position of seek points within a video presentation to be chosen by the video encoding system, rather than requiring the seek points to be at fixed positions. Allowing the video encoding system to choose the seek points results in improved video compression, and thus a higher quality of video media can be provided using a given available bandwidth, resulting in an improved user experience. Current block-request streaming systems may require that all blocks be of the same duration (in video time) and that each block start with a seek point, and this is therefore a disadvantage of existing systems. [00154] A new block-request streaming system that provides advantages over the above is now described. In one embodiment, the video encoding process for a first version of the video component can be configured to choose the positions of seek points so as to optimize compression efficiency, but with the requirement that there be a maximum duration between seek points. This latter requirement does restrict the choice of seek points by the encoding process and thus does reduce the compression efficiency.
However, the reduction in compression efficiency is small compared to that incurred if regular, fixed positions were required for the seek points, provided that the maximum duration between seek points is not too small (e.g., greater than about one second). Additionally, if the maximum duration between seek points is a few seconds, then the reduction in compression efficiency compared to completely free placement of seek points is generally very small. [00156] In many embodiments, including this embodiment, it may be that some RAPs are not seek points; that is, there may be a frame that is a RAP, lying between two consecutive seek points, that is not chosen as a seek point, for example because the RAP is too close in time to the surrounding seek points, or because the amount of media data between the seek point preceding or following the RAP and the RAP is too small. [00157] The position of the seek points within all other versions of the media presentation can be restricted to be the same as the seek points in the first (e.g., highest media data rate) version. This reduces the compression efficiency of these other versions compared to allowing the encoder free choice of seek points. [00158] The use of seek points typically requires that a frame be independently decodable, which generally results in a low compression efficiency for that frame. Frames that do not need to be independently decodable can be encoded with reference to data in other frames, which generally improves the compression efficiency for that frame by an amount that depends on the amount of commonality between the frame to be encoded and the reference frames.
An efficient choice of seek point positioning preferably chooses as a seek point frame a frame that has little commonality with the preceding frames, and thereby minimizes the compression efficiency penalty incurred by encoding that frame in a way that is independently decodable. [00159] However, the level of commonality between a frame and its potential reference frames is highly correlated across the different representations of the content, since the original content is the same. As a result, restricting the seek points in the other variants to be at the same positions as the seek points in the first variant does not make a large difference in compression efficiency. [00160] The seek point structure is preferably used to determine the block structure. Preferably, each seek point determines the start of a block, and there can be one or more blocks encompassing the data between two consecutive seek points. Since the duration between seek points is not fixed for encoding with good compression, not all blocks need have the same playout duration. In some embodiments, blocks are aligned between versions of the content; that is, if there is a block spanning a specific group of frames in one version of the content, then there is a block spanning the same group of frames in another version of the content. The blocks of a given version of the content do not overlap, and each frame of the content is contained within exactly one block of each version. [00161] A feature that enables the efficient use of variable durations between seek points, and thus variable-duration GoPs, is the segment indexing or segment map that can be included in a segment or otherwise provided to a client; that is, it is metadata associated with the segment in this representation, which can be provided comprising the start time and duration of each block of the presentation.
The client can use this segment indexing data to determine the block at which to start the presentation when the user has requested that the presentation start at a particular point within a segment. If such metadata is not provided, then the presentation can start only at the beginning of the content, or at a random or approximate point near the desired point (e.g., by choosing the starting block by dividing the requested starting point (in time) by the average block duration to obtain the index of the starting block). [00162] In one embodiment, each block can be provided as a separate file. In another embodiment, multiple consecutive blocks can be aggregated into a single file to form a segment. In this second embodiment, metadata for each version can be provided comprising the start time and duration of each block and the byte offset within the file at which the block starts. This metadata can be provided in response to an initial protocol request, that is, available separately from the segment or file, or it can be contained within the same file or segment as the blocks themselves, for example at the beginning of the file. As will be clear to those skilled in the art, this metadata can be encoded in a compressed form, such as gzip or delta encoding, or in binary form, in order to reduce the network resources needed to transport the metadata to the client. [00163] Figure 6 illustrates an example of segment indexing in which the blocks are of variable size and in which the scope of a block is a partial GoP, that is, a partial amount of the media data between one RAP and the next RAP. In this example, the seek points are indicated by the RAP indicator, where a RAP indicator value of 1 indicates that the block starts with, or contains, a RAP or seek point, and a RAP indicator of 0 indicates that the block contains no RAP or seek point.
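The fallback described above for when no segment map is available, i.e., guessing the starting block from the average block duration, amounts to the following small sketch (the function name is ours):

```python
def approximate_start_block(requested_time_s: float,
                            average_block_duration_s: float) -> int:
    # Divide the requested starting point (in time) by the average block
    # duration to obtain an approximate starting block index.
    return int(requested_time_s // average_block_duration_s)

# With an average block duration of 0.541 s, a request for t = 22 s maps
# to an approximate starting block index of 40 (values illustrative).
print(approximate_start_block(22.0, 0.541))
```

As the text notes, this approximation can be poor, which is the motivation for providing the segment index explicitly.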
In this example, the first three blocks, i.e., bytes 0 through 157,033, comprise the first GoP, which has a presentation duration of 1.623 seconds, with a presentation time running from 20 seconds into the content to 21.623 seconds. In this example, the first of the three blocks comprises 0.485 seconds of presentation time and comprises the first 50,245 bytes of the media data in the segment. In this example, blocks 4, 5 and 6 comprise the second GoP, blocks 7 and 8 comprise the third GoP, and blocks 9, 10 and 11 comprise the fourth GoP. Note that there may be other RAPs in the media data that are not designated as seek points, and these are therefore not signaled as RAPs in the segment map. [00164] Referring again to Figure 6, if the client or receiver wants to access the content starting at a time offset of approximately 22 seconds into the media presentation, then the client might first use other information, such as the MPD described in more detail later, to determine that the relevant media data is within this segment. The client can download the first part of the segment to obtain the segment indexing, which in this case is only a few bytes, for example using an HTTP byte range request. Using the segment indexing, the client can determine that the first block it should download is the latest block with a time offset of at most 22 seconds that starts with a RAP, i.e., a seek point. In this example, although block 5 has a time offset that is less than 22 seconds, i.e., its time offset is 21.965 seconds, the segment indexing indicates that block 5 does not start with a RAP, and so, based on the segment indexing, the client instead selects block 4 for download, since its start time is at most 22 seconds, i.e., its time offset is 21.623 seconds, and it starts with a RAP. Thus, based on the segment indexing, the client will make an HTTP range request starting at byte offset 157,034.
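The selection logic of [00164] can be sketched as follows. The times 21.623/21.965 and the byte offset 157,034 come from the Figure 6 discussion; the remaining byte offsets and times in the list are inferred for illustration only, and the function name is ours.

```python
def select_start_block(index, target_time_s):
    """index: list of (time_offset_s, byte_offset, starts_with_rap),
    in increasing time order. Pick the latest block whose time offset is
    at most target_time_s, then walk back to the nearest block that
    starts with a RAP (seek point)."""
    i = max(j for j, (t, _, _) in enumerate(index) if t <= target_time_s)
    while not index[i][2]:
        i -= 1
    return index[i]

index = [
    (20.000, 0, True),        # block 1, start of the first GoP
    (20.485, 50245, False),   # block 2
    (21.000, 101000, False),  # block 3 (time/offset illustrative)
    (21.623, 157034, True),   # block 4, start of the second GoP
    (21.965, 199045, False),  # block 5, does not start with a RAP
]
t, byte_offset, _ = select_start_block(index, 22.0)
print(t, byte_offset)  # 21.623 157034 -> HTTP range request from byte 157,034
```

Block 5 is rejected despite its time offset being below 22 seconds, because its RAP indicator is 0, exactly as in the text.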
[00165] If segment indexing were not available, the client might have to download all of the preceding 157,034 bytes of data before downloading this data, leading to a much longer startup time, or channel zapping time, and to wasteful downloading of data that is not useful. Alternatively, if segment indexing is not available, the client can approximate where the desired data starts within the segment, but the approximation may be poor and it may miss the appropriate time and then require backtracking, which again increases the startup delay. [00166] Generally, each block comprises a portion of the media data that, together with the previous blocks, can be played out by a media player. Thus, the block structure, and the signaling of the segment indexing block structure to the client, whether contained within the segment or provided to the client through other means, can significantly improve the client's ability to provide fast channel zapping and continuous playout in the face of network variations and interruptions. The support for variable-duration blocks, and for blocks that encompass only parts of a GoP, as enabled by segment indexing, can significantly improve the streaming experience. For example, referring again to Figure 6 and the example described above in which the client wants to start playback approximately 22 seconds into the presentation, the client can request, through one or more requests, the data within block 4 and then feed it to the media player as soon as it is available, to start playback. Thus, in this example, playback starts as soon as the 42,011 bytes of block 4 are received at the client, thereby enabling a fast channel zapping time. If, instead, the client had to request the entire GoP before playback started, the channel zapping time would be longer, since that would be 144,211 bytes of data.
[00167] In other embodiments, RAPs or seek points may also occur in the middle of a block, and there may be data in the segment indexing that indicates where that RAP or seek point is within the block or fragment. In other embodiments, the time offset may be the decoding time of the first frame within the block, rather than the presentation time of the first frame within the block. [00168] Figures 8(a) and (b) illustrate an example of variable block sizing and of a seek point structure aligned across a plurality of versions or representations; Figure 8(a) illustrates variable block sizing with aligned seek points across a plurality of versions of a media stream, while Figure 8(b) illustrates variable block sizing with non-aligned seek points across a plurality of versions of a media stream. [00169] Time is illustrated across the top, in seconds, and the blocks and seek points of the two segments for the two representations are illustrated from left to right in terms of their timing with respect to this timeline; thus, the length of each block illustrated is proportional to its playout time and not proportional to the number of bytes in the block. In this example, the segment indexing for both segments of the two representations would have the same time offsets for the seek points, but potentially different numbers of blocks or fragments between the seek points, and different byte offsets for the blocks due to the different amounts of media data in each block. In this example, if the client wants to switch from representation 1 to representation 2 at a presentation time of approximately 23 seconds, then the client can request up through block 1.2 in the segment for representation 1 and start requesting the segment for representation 2 starting at block 2.2, and thus the switch will occur at the presentation coinciding with seek point 1.2 in representation 1, which is at the same time as seek point 2.2 in representation 2.
[00170] As should be clear from the above, the described block-request streaming system does not constrain the video encoding to place seek points at specific positions within the content, and thereby mitigates one of the problems of existing systems. [00171] In the embodiments described above, matters are arranged so that the seek points of the various representations of the same content presentation are aligned. However, in many cases it is preferable to relax this alignment requirement. For example, it is sometimes the case that the representations have been generated by encoding tools that do not have the capability to generate seek point-aligned representations. As another example, the content presentation may be encoded into the different representations independently, with no seek point alignment between the different representations. As another example, a representation may contain more seek points because it has a lower rate and needs to be switched to more commonly, or because it contains seek points to support trick modes such as fast forward, rewind or fast seeking. Thus, it is desirable to provide methods that make a block-request streaming system capable of dealing efficiently and seamlessly with non-aligned seek points across the various representations of a content presentation. [00172] In this embodiment, the positions of the seek points across the representations may not be aligned. Blocks are constructed such that a new block starts at each seek point, and thus there may not be alignment between the blocks of the different versions of the presentation. An example of such a non-aligned seek point structure between different representations is illustrated in Figure 8(b).
Time is illustrated across the top in seconds, and the blocks and seek points of the two segments for the two representations are illustrated from left to right in terms of their timing with respect to that timeline; thus, the length of each illustrated block is proportional to its playout time and not proportional to the number of bytes in the block. In this example, the segment indexing for both segments of the two representations will have potentially different time offsets for the seek points, potentially different numbers of blocks or fragments between the seek points, and different byte offsets for the blocks due to the different amounts of media data in each block. In this example, if the client wants to switch from representation 1 to representation 2 at a presentation time of approximately 25 seconds, then the client can request up to block 1.3 in the segment for representation 1 and start requesting the segment for representation 2 starting at block 2.3, and thus the switch can occur at a presentation time coinciding with seek point 2.3 in representation 2, which falls in the middle of the playout of block 1.3 in representation 1. Thus, some of the media of block 1.3 will not be played (although the media data for the unplayed frames of block 1.3 may have to be loaded into the receiver buffer in order to decode the frames of block 1.3 that are played). [00173] In this embodiment, the operation of block selector 123 can be modified so that, whenever it is necessary to select a block from a representation different from the previously selected version, the latest block whose first frame is not later than the frame subsequent to the last frame of the last selected block is chosen.
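The block selection rule of paragraph [00173] can be sketched as follows. This is an illustrative interpretation, not the patent's block selector 123 itself; the function name and the sample block start times are hypothetical.

```python
# Hypothetical sketch of the rule in [00173]: when switching to a different
# representation, choose the latest block of the new representation whose
# first frame is not later than the frame following the last frame of the
# previously selected block.

def select_block(new_rep_block_start_times, next_frame_time):
    """new_rep_block_start_times: presentation times (seconds) of the first
    frame of each block of the new representation, in increasing order.
    next_frame_time: time of the frame subsequent to the last frame of the
    last selected block.  Returns the index of the chosen block, or None."""
    best = None
    for i, t in enumerate(new_rep_block_start_times):
        if t <= next_frame_time:
            best = i  # latest block so far that starts early enough
        else:
            break
    return best
```

For example, if the last played block ends just before time 25 and the new representation's blocks start at 0, 10, 20 and 30 seconds, the block starting at 20 is chosen, so a small amount of media may be decoded but not presented, as in the Figure 8(b) discussion.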
[00174] This last described embodiment can eliminate the requirement to restrict the seek point positions within versions beyond the first version, and thus increases the compression efficiency for these versions, resulting in a higher quality presentation for a given available bandwidth and hence an improved user experience. An additional consideration is that video encoding tools that perform the seek point alignment function across multiple encodings (versions) of the content may not be widely available, and therefore an advantage of this last described embodiment is that currently available video encoding tools can be used. Another advantage is that the encoding of the different versions of the content can proceed in parallel, without any need for coordination between the encoding processes for the different versions. Another advantage is that additional versions of the content can be encoded and added to the presentation later, without having to provide the encoding tools with lists of specific seek point positions. [00175] Generally, where pictures are encoded as groups of pictures (GoPs), the first picture in the sequence can be a seek point, but this need not always be the case.
Optimal Block Partitioning
[00176] One problem in a block-request streaming system is the interaction between the structure of the encoded media, e.g., video media, and the block structure used for block requests. As is known to those skilled in the art of video encoding, it is often the case that the number of bits required for the encoded representation of each video frame varies, sometimes substantially, from frame to frame. As a result, the relationship between the amount of data received and the duration of the media encoded by that data may not be straightforward. Additionally, the division of media data into blocks within a block-request streaming system adds a further dimension of complexity.
In particular, in some systems the media data of a block may not be played back until the entire block has been received; for example, the arrangement of media data within a block, the dependencies between media samples within a block, or the use of erasure codes may result in this situation. As a result of these complex interactions between block size and block duration, and of the possible need to receive an entire block before starting to play it out, it is common for client systems to adopt a conservative approach in which media data is buffered before playout starts. Such buffering results in a long channel zapping time and thus a poor user experience. [00177] Pakzad describes "block partitioning methods", which are new and efficient methods for determining how to partition a data stream into contiguous blocks based on the underlying structure of the data stream, and further describes several advantages of these methods in the context of a streaming system. A further embodiment of the invention, applying the block partitioning methods of Pakzad to a block-request streaming system, is now described. This method can comprise arranging the media data to be presented in approximate presentation time order, such that the playout time of any given element of media data (e.g., a video frame or audio sample) differs from that of any adjacent media data element by less than a given threshold. Media data ordered in this way can be considered a data stream in the terminology of Pakzad, and any of the methods of Pakzad applied to this data stream identifies block boundaries within the data stream. The data between any pair of adjacent block boundaries is considered a "block" in the terminology of this description, and the methods of this description are applied to provide media data presentation within a block-request streaming system. As will be clear to those skilled in the art after reading this description, the several advantages of the methods described in Pakzad can then be realized in the context of a block-request streaming system.
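The "approximate presentation time order" arrangement of paragraph [00177] amounts to a timestamp merge of the per-track sample lists. The following is a minimal sketch under that reading; the function name and the data shapes are invented for illustration and Pakzad's partitioning methods themselves are not reproduced here.

```python
import heapq

def arrange_for_partitioning(tracks):
    """tracks: one list per media track (e.g. video, audio) of
    (presentation_time, payload) pairs, each list already in increasing
    time order.  Merging them by presentation time yields a single
    sequence in approximate presentation time order, the precondition for
    applying block partitioning methods to the merged data stream."""
    return list(heapq.merge(*tracks, key=lambda sample: sample[0]))

video = [(0, "v0"), (40, "v1")]   # times in milliseconds, invented
audio = [(20, "a0"), (60, "a1")]
stream = arrange_for_partitioning([video, audio])
```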
[00178] As described in Pakzad, determining the block structure of a segment, including blocks comprising partial GoPs or parts of more than one GoP, can affect the client's ability to achieve fast channel zapping times. In Pakzad, methods are provided in which, given a target startup time, a block structure and a target download rate are provided that ensure that, if the client starts downloading the representation at any seek point and starts playout after the target startup time has elapsed, then playout will continue uninterrupted as long as, at each point in time, the amount of data the client has downloaded is at least the target download rate multiplied by the time elapsed since the start of the download. It is advantageous for the client to have access to the target startup time and target download rate, as this provides the client with a means to determine when to start playing out the representation at the earliest possible moment, and allows the client to continue playing out the representation as long as the download meets the condition described above. As such, the method described later provides a means of including the target startup time and target download rate within the Media Presentation Description so that they can be used for the purposes described above.
Media Presentation Data Model
[00179] Figure 5 illustrates possible structures of the content store illustrated in Figure 1, including segments and media presentation description files ("MPD" files), and a breakdown of the segments, timing and other structure within an MPD file. Details of possible implementations of MPD structures or files will now be described. In many examples, the MPD is described as a file, but non-file structures can be used as well. [00180] As illustrated here, content store 110 holds a plurality of source segments 510, MPDs 500 and repair segments 512. An MPD may comprise period records 501, which, in turn, may comprise representation records 502,
which contain segment information 503, such as references to initialization segments 504 and media segments 505. [00181] Figure 9(a) illustrates an illustrative metadata table 900, while Figure 9(b) illustrates an example of how an HTTP streaming client 902 obtains metadata table 900 and media blocks 904 over a connection with an HTTP streaming server 906. [00182] In the methods described here, a "Media Presentation Description" is provided that comprises information regarding the representations of the media presentation that are available to the client. The representations can be alternatives, in the sense that the client selects one among the different alternatives, or they can be complementary, in the sense that the client selects several of the representations, each possibly also from a set of alternatives, and presents them jointly. The representations can advantageously be assigned to groups, with the client programmed or configured to understand that representations within one group are alternatives to each other, whereas representations from different groups are such that more than one representation is to be presented jointly. In other words, if there is more than one representation in a group, the client chooses one representation from that group, one representation from the next group, and so on, to form a presentation. [00183] The information describing the representations can advantageously include details of the applied media codecs, including the profiles and levels of those codecs that are needed to decode the representation, video frame rates, video resolution and data rates. The client receiving the Media Presentation Description can use this information to determine in advance whether a representation is suitable for decoding or presentation.
This represents an advantage because, if the differentiating information were contained only in the binary data of the representation, it would be necessary to request the binary data of all the representations and to parse and extract the relevant information in order to discover the information about its suitability. These multiple requests, and the attendant parsing and extraction of the data, could take some time, which would result in a long startup time and therefore a poor user experience. [00184] Additionally, the Media Presentation Description may comprise information restricting the client's requests based on the time of day. For example, for a live service the client might be restricted to requesting parts of the presentation that are close to the "current broadcast time". This is an advantage because, for live broadcast, it may be desirable to purge, from the content serving infrastructure, data that was broadcast more than a given threshold before the current broadcast time. This may be desirable for reusing storage resources within the serving infrastructure. It may also be desirable depending on the type of service offered: for example, in some cases a presentation may be made available only live due to a certain subscription model of the client receiving devices, whereas other media presentations may be made available both live and on-demand, and still other presentations may be made available only live to a first class of client devices, only on-demand to a second class of client devices, and as a combination of live and on-demand to a third class of client devices. The methods described in the Media Presentation Data Model (below) allow the client to be informed of such policies so that the client can avoid making requests, and can adjust the offering to the user, for data that may not be available in the serving infrastructure. As an alternative, for example, the client can present a notification to the user that this data is not available.
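Returning to the target startup time and target download rate of paragraph [00178], which can be carried in the Media Presentation Description, the client-side check they enable can be sketched as follows. This is a minimal illustration, with an invented function name; the condition itself is the one stated in [00178].

```python
def playout_is_safe(download_samples, target_rate):
    """download_samples: (elapsed_seconds, cumulative_bytes) pairs observed
    since the download started at a seek point.  Per the condition of
    [00178], if every sample satisfies
        cumulative_bytes >= target_rate * elapsed_seconds,
    then playout begun once the target startup time has elapsed can
    continue without stalling."""
    return all(got >= target_rate * t for t, got in download_samples)

# The client starts playout after the target startup time and keeps
# playing while this condition continues to hold.
```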
[00185] In a further embodiment of the invention, the media segments may conform to the ISO Base Media File Format described in ISO/IEC 14496-12 or to derived specifications (such as the 3GP file format described in 3GPP Technical Specification 26.244). The Use of the 3GPP File Format section (above) describes novel enhancements to the ISO Base Media File Format permitting efficient use of the data structures of this file format within a block-request streaming system. As described in that reference, information can be provided within the file allowing fast and efficient mapping between media presentation time segments and byte ranges within the file. The media data itself can be structured according to the Movie Fragment construction defined in ISO/IEC 14496-12. This information providing time and byte offsets can be structured hierarchically or as a single block of information. This information can be provided at the beginning of the file. Providing this information using an efficient encoding, as described in the 3GPP File Format section, results in the client being able to retrieve this information quickly, for example using partial HTTP GET requests in the case that the file download protocol used by the block-request streaming system is HTTP, which results in a short startup, seek or stream switching time and therefore an improved user experience. [00186] The representations in a media presentation are synchronized on a global timeline to ensure seamless switching across representations, which are typically alternatives, and to ensure the synchronized presentation of two or more representations. Therefore, the timing of the media samples contained in the representations within an adaptive HTTP streaming media presentation can be related to a continuous global timeline across multiple segments. [00187] A block of media containing encoded media of multiple types, e.g., audio and video, may have different presentation end times for the different media types.
In a block-request streaming system, such media blocks can be played out consecutively in such a way that each media type is played continuously, and thus media samples of one type from one block may be played out before the media samples of another type from the preceding block; this is referred to here as "continuous block joining". As an alternative, such media blocks can be played out in such a way that the earliest sample of any type in one block is played after the latest sample of any type in the preceding block; this is referred to here as "discontinuous block joining". Continuous block joining may be appropriate when both blocks contain media from the same content item and the same representation, encoded in sequence, or in other cases. Typically, within one representation, continuous block joining can be applied when joining two blocks. This is advantageous as existing encoding can be applied and segmentation can be done without having to align the media tracks at the block boundaries. This is illustrated in Figure 10, where video stream 1000 comprises block 1202 and other blocks, with RAPs such as RAP 1204.
Media Presentation Description
[00188] A media presentation can be viewed as a structured collection of files on an HTTP streaming server. The HTTP streaming client can download sufficient information to present the streaming service to the user. Alternative representations may comprise one or more 3GP files, or parts of 3GP files, conforming to the 3GP file format or at least to a well-defined set of data structures that can be easily converted to or from a 3GP file. [00189] A media presentation can be described by a media presentation description. The MPD can contain metadata that the client can use to construct the appropriate file requests, e.g., HTTP GET requests, to access the data at the proper time and to provide the streaming service to the user.
The media presentation description can provide sufficient information for the HTTP streaming client to select the appropriate 3GPP files and parts of files. The units that are signaled to the client as being accessible are referred to as segments. [00190] Among others, a media presentation description may contain elements and attributes as follows. [00191] MediaPresentationDescription Element [00192] An element encapsulating the metadata used by the HTTP streaming client to provide the streaming service to the end user. The MediaPresentationDescription element can contain one or more of the following attributes and elements.
Version: Version number of the protocol, to ensure extensibility.
PresentationIdentifier: Information by which the presentation can be uniquely identified among other presentations. It can also contain private fields or names.
UpdateFrequency: Update frequency of the media presentation description, i.e., how often the client may reload the actual media presentation description. If not present, the media presentation may be static. Updating the media presentation may mean that the media presentation description cannot be cached.
MediaPresentationDescriptionURI: URI for dating the media presentation description.
Stream: Describes the type of the stream or media presentation: video, audio, or text. A video stream type can contain audio and can contain text.
Service: Describes the service type with additional attributes. Service types can be live or on-demand. This can be used to inform the client that seeking and accessing beyond some current time is not permitted.
MaximumClientPreBufferTime: The maximum amount of time the client may pre-buffer the media stream. This timing can differentiate streaming from progressive download if the client is restricted from downloading beyond this maximum pre-buffer time. [00193] The value may be absent, indicating that no restrictions in terms of pre-buffering apply.
SafetyGuardIntervalLiveService: Information about the maximum turnaround time of a live service on the server. This provides an indication to the client of which information is already accessible at the current time. This information may be necessary if the client and the server operate on UTC time and no precise time synchronization is provided.
TimeShiftBufferDepth: Information about how far back the client may move in a live service relative to the current time. By extending this depth, time-shift viewing and catch-up services can be enabled without specific changes to the service provisioning.
LocalCachingPermitted: This flag indicates whether the HTTP client may cache the downloaded data locally after it has been played out.
LivePresentationInterval: Contains the time intervals during which the presentation may be available, by specifying StartTimes and EndTimes. StartTime indicates the start time of the service and EndTime indicates the end time of the service. If EndTime is not specified, then the end time is unknown at the current time, and UpdateFrequency may ensure that clients gain access to the end time before the actual end time of the service.
OnDemandAvailabilityInterval: The presentation interval indicates the availability of the service on the network. Multiple presentation intervals can be provided. The HTTP client may not be able to access the service outside any specified time window. By the provision of on-demand intervals, additional time-shift viewing can be specified. This attribute may also be present for a live service. In the case where it is present for a live service, the server can ensure that the client can access the service as an on-demand service during all the provided availability intervals. Therefore, the LivePresentationInterval may not overlap any OnDemandAvailabilityInterval.
MPDFileInfoDynamic: Describes the default dynamic construction of the files in the media presentation. More details are provided below.
The default specification at the MPD level can avoid unnecessary repetition if the same rules are used for several or all alternative representations.
MPDCodecDescription: Describes the main default codecs in the media presentation. More details are provided below. The default specification at the MPD level can avoid unnecessary repetition if the same codecs are used for several or all representations.
MPDMoveBoxHeaderSizeDoesNotChange: A flag indicating whether the MoveBox header changes size among the individual files within the entire media presentation. This flag can be used to optimize the download and may only be present in the case of specific segment formats, especially those for which the segments contain the moov header.
FileURIPattern: A pattern used by the client to generate request messages for the files within the media presentation. The different attributes allow the generation of unique URIs for each of the files within the media presentation. The base URI can be an HTTP URI.
AlternativeRepresentation: Describes a list of representations.
AlternativeRepresentation Element: [00194] An XML element that encapsulates all the metadata for one representation. The AlternativeRepresentation element can contain the following attributes and elements.
RepresentationID: A unique ID for this specific alternative representation within the media presentation.
FilesInfoStatic: Provides an explicit list of the start times and URIs of all files of one alternative presentation. The static provisioning of the file list may provide the advantage of an exact timing description of the media presentation, but it may not be as compact, especially if the alternative representation contains many files. In addition, the file names may be arbitrary names.
FilesInfoDynamic: Provides an implicit way to construct the list of start times and URIs of one alternative presentation. The dynamic provisioning of the file list may provide the advantage of a more compact representation.
If only the sequence of start times is provided, then the timing advantages also apply here, but the file names are to be constructed dynamically based on the FileURIPattern. If only the duration of each segment is provided, then the representation is compact and may be suited for use within live services, but the generation of the files may be governed by global timing.
APMoveBoxHeaderSizeDoesNotChange: A flag that indicates whether the MoveBox header changes size among the individual files within the alternative description. This flag can be used to optimize the download and may only be present in the case of specific segment formats, especially those for which the segments contain the moov header.
APCodecDescription: Describes the main codecs of the files in the alternative presentation.
MediaDescription Element
MediaDescription: An element that may encapsulate all the metadata for the media that is contained in this representation. Specifically, it may contain information about the tracks in this alternative presentation, as well as recommended groupings of tracks, if applicable. The MediaDescription attribute contains the following attributes:
TrackDescription: An XML attribute that encapsulates all the metadata for the media that is contained in this representation. The TrackDescription attribute contains the following attributes:
TrackID: A unique ID for the track within the alternative representation. This may be used in the case where the track is part of a grouping description.
Bitrate: The bitrate of the track.
TrackCodecDescription: An XML attribute that contains all the attributes of the codec used in this track. The TrackCodecDescription attribute contains the following attributes:
MediaName: An attribute defining the media type. The media types include "audio", "video", "text", "application" and "message".
Codec: The codec type, including a profile and level.
LanguageTag: The language tag, if applicable.
MaxWidth, MaxHeight: For video, the height and width of the contained video, in pixels.
SamplingRate: For audio, the sampling rate.
GroupDescription: An attribute that provides a recommendation to the client for appropriate grouping based on different parameters.
GroupType: A type based on which the client may decide how to group tracks.
[00195] The information in a media presentation description is advantageously used by an HTTP streaming client to perform requests for files/segments or parts thereof at appropriate times, and to select the segments from suitable representations that match its capabilities as well as user preferences such as language, and so on. Additionally, since the Media Presentation Description describes representations that are time-aligned and mapped to a global timeline, the client can also use the information in the MPD, during an ongoing media presentation, to initiate the appropriate actions to switch across representations, to present representations jointly, or to seek within the media presentation.
Signaling Segment Start Times
[00196] A representation can be divided, in the sense of time, into multiple segments. An inter-track timing issue exists between the last fragment of one segment and the next fragment of the next segment. Additionally, another timing issue exists in the case where segments of constant duration are used. [00197] Using the same duration for each segment can have the advantage that the MPD is compact and static. However, each segment may still start at a Random Access Point. In this way, either the video encoding may be constrained to provide Random Access Points at these specific points, or the actual segment durations may not be precisely as specified in the MPD. It may be desirable that the streaming system not impose unnecessary restrictions on the video encoding process, and thus the second option may be preferred. [00198] Specifically, if the file duration is specified in the MPD as d seconds, then file n may start with the Random Access Point at or immediately after time (n-1)d.
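As a purely illustrative, non-normative sketch, a client might model a downloaded media presentation description carrying some of the attributes defined above as a simple in-memory structure and use it for the representation selection of paragraph [00183]. All attribute values, codec strings and function names here are invented.

```python
# Hypothetical in-memory model of an MPD; values are invented.
mpd = {
    "Version": 1,
    "Service": "live",
    "UpdateFrequency": 30,        # seconds between MPD reloads
    "TimeShiftBufferDepth": 600,  # seconds the client may shift back
    "FileURIPattern": "http://example.com/{rep}/seg{index}.3gp",
    "AlternativeRepresentations": [
        {"RepresentationID": "rep1", "Bitrate": 500_000,
         "MediaDescription": {"MediaName": "video", "Codec": "avc1.42E00D",
                              "MaxWidth": 640, "MaxHeight": 360}},
        {"RepresentationID": "rep2", "Bitrate": 1_000_000,
         "MediaDescription": {"MediaName": "video", "Codec": "avc1.4D401E",
                              "MaxWidth": 1280, "MaxHeight": 720}},
    ],
}

def select_representation(mpd, max_bitrate):
    """Pick the highest-bitrate representation not exceeding max_bitrate,
    using the differentiating attributes carried in the MPD (cf. [00183]),
    without ever downloading the representations' binary data."""
    candidates = [r for r in mpd["AlternativeRepresentations"]
                  if r["Bitrate"] <= max_bitrate]
    return max(candidates, key=lambda r: r["Bitrate"]) if candidates else None
```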
[00199] In this approach, each file can include information about the exact start time of the segment in terms of global presentation time. Three possible ways to signal this include: (1) First, constrain the start time of each segment to the exact timing as specified in the MPD. But then the media encoder may not have any flexibility in the placement of IDR frames and may require special encoding for file streaming. (2) Second, add the exact start time to the MPD for each segment. For the on-demand case, the compactness of the MPD may be reduced. For the live case, this may require a regular MPD update, which may reduce scalability. (3) Third, add the global time, or the exact start time relative to the announced start time of the representation or the announced start time of the segment in the MPD, to the segment itself, in the sense that the segment contains this information. This can be added to a new box dedicated to adaptive streaming. This box can also include information as provided by the TIDX or SIDX box. A consequence of this third approach is that, when seeking to a particular position near the beginning of one of the segments, the client may, based on the MPD, choose the segment subsequent to the one containing the required seek point. A simple response in this case may be to move the seek point forward to the beginning of the retrieved segment (that is, to the next Random Access Point after the seek point). Typically, Random Access Points are provided at least every few seconds (and there is often little coding gain in making them less frequent), and so, in the worst case, the seek point may be moved to be a few seconds later than specified. Alternatively, the client could determine, upon retrieving the header information for the segment, that the requested seek point is in fact in the previous segment, and request that segment instead. This may result in an occasional increase in the time required to execute the seek operation.
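Under the convention of paragraph [00198], a seek can first be resolved to a nominal segment, after which the client either plays from that segment's first RAP (moving the seek point forward by at most a few seconds) or, after inspecting the segment's header information, falls back to the previous segment. The following minimal sketch covers only the nominal step; the function name is illustrative.

```python
def nominal_segment_for(seek_time, d):
    """With segments of nominal duration d seconds, segment n (1-based)
    starts with a Random Access Point at or immediately after time (n-1)*d.
    Return (n, nominal_start) for the segment whose nominal interval
    [(n-1)*d, n*d) contains seek_time.  The client may then play from that
    segment's first RAP, or discover from its header that the wanted RAP
    lies in segment n-1 and request that segment instead."""
    n = int(seek_time // d) + 1
    return n, (n - 1) * d
```

For example, seeking to 25 seconds with d = 10 nominally resolves to segment 3, whose first RAP is at or immediately after 20 seconds.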
List of Accessible Segments
[00200] The media presentation comprises a set of representations, each providing some different encoded version of the original media content. The representations themselves advantageously contain information about the parameters of the representation that differentiate it from the other representations. They also contain, explicitly or implicitly, a list of accessible segments. [00201] Segments can be differentiated into time-less segments, containing only metadata, and media segments, which primarily contain media data. The MPD advantageously identifies and assigns different attributes to each of the segments, either implicitly or explicitly. Attributes advantageously assigned to each segment comprise the period during which the segment is accessible, and the resources and protocols through which the segments are accessible. Additionally, media segments are advantageously assigned attributes such as the start time of the segment in the media presentation and the duration of the segment in the media presentation. [00202] Where the media presentation is of the "on-demand" type, as advantageously indicated by an attribute in the media presentation description such as OnDemandAvailabilityInterval, then the media presentation description typically describes all the segments and also provides an indication of when the segments are accessible and when they are not. The start times of the segments are advantageously expressed relative to the start of the media presentation, so that two clients starting the playout of the same media presentation, but at different times, can use the same media presentation description as well as the same media segments. This advantageously improves the cacheability of the segments.
[00203] Where the media presentation is of the "live" type, as advantageously indicated by an attribute in the media presentation description such as the Service attribute, then the segments comprising the media presentation beyond the actual time of day are generally not yet generated, or at least not accessible, despite the segments being fully described in the MPD. However, with the indication that the media presentation service is of the "live" type, the client can produce a list of accessible segments, along with their timing attributes, for an internal client wall-clock time NOW, based on the information contained in the MPD and the download time of the MPD. The server advantageously operates in the sense of making resources accessible such that a reference client, operating with the MPD instance at the wall-clock time NOW, can access those resources. [00204] Specifically, the reference client produces a list of accessible segments, along with their timing attributes, for an internal client wall-clock time NOW, based on the information contained in the MPD and the download time of the MPD. As time advances, the client will use the same MPD and will create a new list of accessible segments that can be used to continuously play out the media presentation. Therefore, the server can announce segments in an MPD before these segments are actually accessible. This is beneficial, as it reduces frequent updating and downloading of the MPD. [00205] It is assumed that a list of segments, each with a start time tS, is described either explicitly by a playlist in elements such as FileInfoStatic, or implicitly by the use of an element such as FileInfoDynamic. An advantageous method for generating a segment list using FileInfoDynamic is described below. Based on this construction rule, the client has access to a list of URIs for each representation r, referred to here as FileURI(r,i), and a start time tS(r,i) for each segment with index i.
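An implicit, FileInfoDynamic-style construction of the pairs FileURI(r,i) and tS(r,i) can be sketched as follows. The URI pattern and the constant segment duration are invented for illustration; an actual deployment would take both from the MPD (cf. FileURIPattern and FilesInfoDynamic above).

```python
# Hypothetical URI pattern, standing in for a FileURIPattern from the MPD.
FILE_URI_PATTERN = "http://example.com/{rep}/seg{index}.3gp"

def segment_list(rep_id, segment_duration, count):
    """Construct (FileURI(r,i), tS(r,i)) pairs implicitly: URIs from a URI
    pattern, start times from a constant segment duration, for segments
    with 1-based index i."""
    return [(FILE_URI_PATTERN.format(rep=rep_id, index=i),
             (i - 1) * segment_duration)
            for i in range(1, count + 1)]
```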
[00206] The use of the information in the MPD to create the accessible time window of segments can be performed using the following rules. [00207] For an "on-demand" type service, as advantageously indicated by an attribute such as Service, if the current wall-clock time at the client, NOW, is within any availability range, advantageously expressed by an MPD element such as OnDemandAvailabilityInterval, then all the described segments of this on-demand presentation are accessible. If the current wall-clock time at the client, NOW, is outside any availability range, then none of the described segments of this on-demand presentation is accessible. [00208] For a "live" type service, as advantageously indicated by an attribute such as Service, the start time tS(r,i) advantageously expresses the availability time in wall-clock time. The availability start time can be derived as a combination of the live event time of the service and some server turnaround time for capturing, encoding and publishing. The time for this process can, for example, be specified in the MPD, for example using a specified safety guard interval tG, for example specified as SafetyGuardIntervalLiveService in the MPD. This would provide the minimum difference between UTC time and the availability of the data on the HTTP streaming server. In another embodiment, the MPD explicitly specifies the segment availability time in the MPD, without providing the turnaround time as a difference between the live event time and the turnaround time. In the following descriptions, it is assumed that any global times are specified as availability times. Those skilled in the art of live media broadcasting can derive this information from suitable information in the media presentation description after reading this description.
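The accessibility rules above, combined with the time-shift buffer depth tTSB described in paragraph [00210] below, can be sketched in a single client-side function. This is an illustrative reading, with invented names; the availability time of a live segment may itself be derived as the live event time plus the safety guard interval tG (SafetyGuardIntervalLiveService), as described above.

```python
def accessible_segments(service, start_times, now,
                        availability_intervals=None, t_tsb=None, d=0.0):
    """Return the indices of segments accessible at wall-clock time `now`.

    "on-demand": all described segments are accessible iff `now` falls
    inside some OnDemandAvailabilityInterval; otherwise none is.
    "live": each start time tS(r,i) is interpreted as an availability time,
    and only segments with tS in [now - t_tsb - d, now] are accessible,
    where t_tsb is the time-shift buffer depth and d the segment duration
    (so a segment whose end still falls in the window is included)."""
    if service == "on-demand":
        available = any(lo <= now <= hi for lo, hi in availability_intervals)
        return list(range(len(start_times))) if available else []
    if service == "live":
        lo = now - t_tsb - d
        return [i for i, ts in enumerate(start_times) if lo <= ts <= now]
    raise ValueError("unknown service type")
```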
[00209] If the current clock time NOW at the client is outside any live presentation interval, advantageously expressed by an MPD element such as LivePresentationInterval, then none of the described segments of that live presentation is accessible. If the current clock time NOW at the client is within the live presentation interval, then at least certain segments of the described segments of that live presentation may be accessible. [00210] The restriction of accessible segments is governed by the following values: the clock time NOW (as available to the client); and the permitted time-shift buffer depth tTSB, for example specified as TimeShiftBufferDepth in the media presentation description. A client at relative event time ti may only request segments with start times tS(r,i) in the interval between (NOW - tTSB) and NOW, or in an interval such that the end time of the segment with duration d is also included, resulting in an interval between (NOW - tTSB - d) and NOW. MPD Update [00211] In some embodiments, the server does not know in advance the file or segment locators and the start times of the segments, for example because the server location will change, or because the media presentation includes advertisements from a different server, or because the duration of the media presentation is unknown, or because the server wants to obfuscate the locators for the following segments. [00212] In such embodiments, the server might only describe segments that are already accessible or that become accessible shortly after this MPD instance has been published. Additionally, in some embodiments, the client advantageously consumes media close to the media described in the MPD, so that the user experiences the contained media program as closely as possible to the generation of the media content.
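The time-shift window rule of [00210] can be sketched as follows. The function and parameter names are illustrative; the sketch uses the extended interval between (NOW - tTSB - d) and NOW, which also admits a segment whose end time still falls inside the buffer.

```python
# Illustrative sketch of the accessibility rule of [00210]: a segment with
# start time ts and duration d is requestable when ts lies in the interval
# [NOW - tTSB - d, NOW]. Names are assumptions made for this sketch.

def accessible_indices(start_times, durations, now, t_tsb):
    """Return indices of segments whose start times fall within the
    time-shift window for clock time `now` and buffer depth `t_tsb`."""
    out = []
    for i, (ts, d) in enumerate(zip(start_times, durations)):
        if now - t_tsb - d <= ts <= now:
            out.append(i)
    return out
```

With 10-second segments starting at 0, 10, 20 and 30, NOW = 35 and tTSB = 20, the window lower bound is 35 - 20 - 10 = 5, so only the segments starting at 10, 20 and 30 remain requestable.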
As soon as the client anticipates that it has reached the end of the media segments described in the MPD, it advantageously requests a new MPD instance in order to continue playback, in the expectation that the server has published a new MPD describing new media segments. The server advantageously generates new MPD instances and updates the MPD such that clients can rely on the procedures for continuous updates. The server can adapt its MPD update procedures, along with its segment generation and publishing procedures, to the procedures of a reference client that acts as a regular client would. [00213] If a new MPD instance only describes a short time into the future, then clients need to request new MPD instances frequently. This can result in scalability problems and in unnecessary uplink and downlink traffic due to frequent unnecessary requests. [00214] Therefore, it is relevant on the one hand to describe segments as far as possible into the future without necessarily making them accessible yet, and on the other hand to enable unforeseen updates in the MPD to express new server locations, to allow insertion of new content such as advertisements, or to provide changes to codec parameters. [00215] Additionally, in some embodiments, the duration of the media segments may be small, such as in the range of several seconds. The duration of the media segments is advantageously flexible, so as to adjust to appropriate segment sizes that can be optimized for delivery or caching properties, to compensate for end-to-end delay in live services, to address other aspects dealing with the storage or delivery of segments, or for other reasons. Especially in cases where segments are small compared to the media presentation duration, a significant number of media segment resources and start times need to be described in the media presentation description.
As a result, the size of the media presentation description can be large, which can adversely affect the download time of the media presentation description and therefore affect the start-up delay of the media presentation as well as the bandwidth usage on the access link. Therefore, it is advantageous not only to permit the description of a list of media segments using playlists, but also to permit a description using templates or URL construction rules. Templates and URL construction rules are used synonymously in this description. [00216] Additionally, templates can advantageously be used to describe segment locators in live cases beyond the current time. In such cases, updates of the MPD are per se unnecessary, as the locators, as well as the segment list, are described by the templates. However, unforeseen events may still occur that require changes to the description of the representations or the segments. Changes to an HTTP streaming media presentation description may be necessary when content from multiple different sources is spliced together, for example when advertising has been inserted. The content from different sources may differ in a variety of ways. Another reason, during live presentations, is that it may be necessary to change the URLs used for content files to provide fail-over from one live origin server to another. [00217] In some arrangements, it is advantageous that if the MPD is updated, then the updates to the MPD are carried out such that the updated MPD is compatible with the previous MPD, in the sense that the reference client, and therefore any implemented client, generates from the updated MPD a functionally identical list of accessible segments, for any time up to the validity time of the previous MPD, to the list it would have generated from the previous MPD instance.
This requirement ensures that (a) clients can immediately start using the new MPD without synchronization with the old MPD, since it is compatible with the old MPD prior to the update time; and (b) the update time need not be synchronized with the time at which the actual MPD change occurs. In other words, updates to the MPD can be advertised in advance, and the server can replace the old MPD instance as soon as new information is available, without having to maintain different versions of the MPD. [00218] Two possibilities may exist for the media timing across an MPD update for a set of representations or for all representations: (a) the existing global timeline continues across the MPD update (referred to here as a "continuous MPD update"), or (b) the current timeline ends and a new timeline begins with the segment following the change (referred to here as a "discontinuous MPD update"). [00219] The difference between these options can be evident when considering that the tracks of a media fragment, and therefore of a segment, generally do not start and end at the same time due to the different sample granularities across the tracks. During normal presentation, samples of one track of a fragment may be rendered before some samples of another track of the previous fragment; i.e., there is some kind of overlap between fragments, although there may not be overlap within a single track. [00220] The difference between (a) and (b) is whether such an overlap can be enabled across an MPD update. When the MPD update is due to the splicing of completely separate content, such overlap is generally difficult to achieve, as the new content needs new encoding to be spliced with the previous content. It is therefore advantageous to provide the ability to discontinuously update the media presentation by restarting the timeline for certain segments and possibly also defining a new set of representations after the update.
Also, if the content has been independently encoded and segmented, then adjusting the timestamps to fit within the global timeline of the previous piece of content is also avoided. [00221] When the update is for minor reasons, such as only adding new media segments to the list of described media segments, or if the location of the URLs is changed, then overlap and continuous updates may be allowed. [00222] In the case of a discontinuous MPD update, the timeline of the last segment of the previous representation ends at the latest presentation end time of any sample in the segment. The timeline of the next representation (or, more precisely, the first presentation time of the first media segment of the new part of the media presentation, also referred to as the new period) typically and advantageously starts at the same instant as the end of the presentation of the last period, so that continuous playback is ensured. [00223] The two cases are illustrated in figure 11. [00224] It is preferred and advantageous to restrict MPD updates to segment boundaries. The rationale for restricting such changes or updates to segment boundaries is as follows. First, changes to the binary metadata for each representation, typically the Movie Header, may occur at least at segment boundaries. Second, the Media Presentation Description can contain pointers (URLs) to the segments. In a sense, the MPD is the "umbrella" data structure grouping together all the segment files associated with the media presentation. To maintain this containment relationship, each segment can be referenced by a single MPD, and when the MPD is updated, it is advantageously updated only on a segment boundary.
[00225] Segment boundaries are not generally required to be aligned; however, for the case of content spliced from different sources, and for discontinuous MPD updates generally, it makes sense to align the segment boundaries (specifically, such that the last segment of each representation ends at the same video frame and does not contain audio samples with a presentation start time later than the presentation time of that frame). A discontinuous update can then start a new set of representations at a common instant, referred to as a period. The start time of the validity of this new set of representations is given, for example, by a period start time. The relative start time of each representation is reset to zero, and the start time of the period places the set of representations in this new period on the global media presentation timeline. [00226] For continuous MPD updates, segment boundaries are not required to be aligned. Each segment of each alternative representation may be governed by a single Media Presentation Description, and thus the update requests for new instances of the Media Presentation Description, generally triggered by the anticipation that no additional media segments are described in the operating MPD, may take place at different times depending on the consumed set of representations, including the set of representations that are anticipated to be consumed. [00227] To support updates to MPD elements and attributes in a more general case, any elements, not just representations or sets of representations, can be associated with a validity time. So, if certain elements of the MPD need to be updated, for example where the number of representations is changed or the URL construction rules are changed, then these elements can each be updated individually at specified times, by providing multiple copies of the element with disjoint validity times.
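The period mechanism of [00225] places each new set of representations on the global timeline via its period start time, with representation-relative times reset to zero. A small sketch of this mapping, with hypothetical helper names, follows.

```python
# Illustrative sketch of the period timeline model of [00225]: each period
# has a start time on the global media presentation timeline, and times
# within a period are relative to that start. Names are assumptions.

def to_global_time(period_start, relative_time):
    """Map a representation-relative time (reset to zero at each
    discontinuous update) onto the global presentation timeline."""
    return period_start + relative_time

def containing_period(global_time, period_starts):
    """Return the index of the period containing `global_time`, given
    the ascending list of period start times."""
    idx = 0
    for i, start in enumerate(period_starts):
        if global_time >= start:
            idx = i
    return idx
```

For instance, with periods starting at global times 0, 60 and 120 seconds, a relative time of 4.5 s in the third period corresponds to global time 124.5 s.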
[00228] Validity is advantageously associated with global media time, so that the described element associated with a validity time is valid during a period of the global timeline of the media presentation. [00229] As discussed above, in one embodiment, validity times are only added for a full set of representations. Each full set then forms a period. The validity time then forms the start time of the period. In other words, in a specific case of using the validity element, a full set of representations may be valid for a period of time, indicated by a global validity time for a set of representations. The validity time of a set of representations is referred to as a period. At the start of a new period, the validity of the previous set of representations expires and the new set of representations is valid. Note again that the validity times of the periods are preferably disjoint. [00230] As noted above, changes to the Media Presentation Description occur at segment boundaries, and therefore, for each representation, the change of an element actually takes place at the next segment boundary. The client can then form a valid MPD, including a list of segments, for each instant of time within the media presentation time. [00231] Discontinuous block splicing may be suitable in cases where the blocks contain media data from different representations, or from different content, for example from a content segment and an advertisement, or in other cases. It may be required in a block-request streaming system that changes to presentation metadata take place only at block boundaries. This may be advantageous for implementation reasons, because updating media decoder parameters within a block may be more complex than updating them only between blocks.
In that case, it may advantageously be specified that the validity intervals as described above may be interpreted as approximate, such that an element is considered valid from the first block boundary not earlier than the start of its specified validity interval until the first block boundary not earlier than the end of its specified validity interval. [00232] An illustrative embodiment of the above describes novel enhancements to a block-request streaming system, described in the section presented later entitled Changes in Media Presentations. Segment Duration Signaling [00233] Discontinuous updates effectively divide the presentation into a series of disjoint intervals, referred to as periods. Each period has its own timeline for the media sample timing. The media timing of the representations within a period can advantageously be indicated by specifying a separate compact list of segment durations for each period or for each representation within a period. [00234] An attribute, for example referred to as the period start time, associated with elements within the MPD, may specify the validity time of certain elements within the media presentation time. This attribute can be added to any elements of the MPD (attributes that can be assigned a validity may be changed into elements). [00235] For discontinuous MPD updates, the segments of all representations may end at the discontinuity. This generally implies at least that the last segment before the discontinuity has a different duration from the previous ones. Signaling the segment duration may involve either indicating that all segments have the same duration or indicating a separate duration for every segment. It may be desirable to have a compact representation for a list of segment durations that is efficient in the case that many of them have the same duration.
[00236] The durations of the segments in one representation or in a set of representations can advantageously be carried in a single string that specifies all segment durations for a single interval from the start of the discontinuous update, that is, from the beginning of the period, until the last media segment described in the MPD. In one embodiment, the format of this element is a text string conforming to a production that contains a list of segment duration entries, where each entry contains a duration attribute dur and an optional multiplier attribute mult, indicating that this representation contains <mult> segments of the duration <dur> of the first entry, then <mult> segments of the duration <dur> of the second entry, and so on. [00237] Each duration entry specifies the duration of one or more segments. If the <dur> value is followed by the "*" character and a number, then this number specifies the number of consecutive segments of this duration, in seconds. If the multiplier sign "*" is absent, the number of segments is one. If "*" is present with no following number, then all subsequent segments have the stated duration and there may be no further entries in the list. For example, the string "30*" means that all segments have a duration of 30 seconds. The string "30*12 10.5" indicates 12 segments of duration 30 seconds, followed by one of duration 10.5 seconds. [00238] If segment durations are specified separately for each alternative representation, then the sum of the segment durations within each interval may be equal for each representation. In the case of video tracks, the interval may end at the same frame in each alternative representation. [00239] Those of ordinary skill in the art, upon reading this disclosure, may find similar and equivalent ways to express segment durations in a compact manner.
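The compact duration string of [00236]-[00237] can be parsed as sketched below. This is an illustrative reading of the production rules as described in the text, not a normative grammar.

```python
# Illustrative parser for the compact segment-duration string of
# [00236]-[00237], e.g. "30*12 10.5" (12 segments of 30 s, then one of
# 10.5 s) or "30*" (all segments last 30 s). A sketch, not normative.

def parse_durations(s: str):
    """Parse a compact duration list.

    Returns (durations, rest): an explicit list of segment durations in
    seconds, plus the duration (or None) that all further segments take
    when an entry ends with a bare "*"."""
    durations = []
    for entry in s.split():
        if entry.endswith("*"):
            # "30*": all subsequent segments have this duration;
            # no further entries may follow.
            return durations, float(entry[:-1])
        if "*" in entry:
            # "30*12": 12 consecutive segments of 30 seconds each.
            dur, mult = entry.split("*")
            durations.extend([float(dur)] * int(mult))
        else:
            # "10.5": a single segment of this duration.
            durations.append(float(entry))
    return durations, None
```

This run-length form keeps the MPD compact exactly when many consecutive segments share the same duration, which is the common case discussed in [00235].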
[00240] In another embodiment, the duration of a segment is signaled to be constant for all segments in the representation except the last one, by a signal duration attribute <duration>. The duration of the last segment before a discontinuous update may be shorter, as long as the start point of the next discontinuous update or the start of the new period is provided, which then implies the duration of the last segment as extending to the start of the next period. Changes and Updates to Representation Metadata [00241] Indication of changes to binary coded representation metadata, such as movie header "moov" changes, can be accomplished in different ways: (a) there can be one moov box for the entire representation in a separate file referenced in the MPD, (b) there can be one moov box for each alternative representation in a separate file referenced in each Alternative Representation, (c) each segment can contain a moov box and is therefore self-contained, (d) there can be one moov box for the entire representation in one 3GP file together with the MPD. [00242] Note that in the cases of (a) and (b), the single "moov" can advantageously be combined with the validity concept from above, in the sense that more "moov" boxes can be referenced in an MPD as long as their validity is disjoint. For example, with the definition of a period boundary, the validity of the "moov" in the old period can expire with the start of the new period. [00243] In the case of option (a), the reference to the single moov box can be assigned a validity element. Multiple presentation headers may be allowed, but only one may be valid at a time. In another embodiment, the validity time of the entire set of representations in a period, or of the entire period as defined above, can be used as the validity time for this representation metadata, typically provided as the moov header. [00244] In the case of option (b), the reference to the moov box of each representation can be assigned a validity element.
Multiple representation headers may be allowed, but only one may be valid at a time. In another embodiment, the validity time of the entire representation, or of the entire period as defined above, can be used as the validity time for this representation metadata, typically provided as the moov header. [00245] In the case of option (c), no signaling in the MPD may be added, but additional signaling in the media stream may be added to indicate whether the moov box will change for any of the upcoming segments. This is explained further below in the context of "Signaling Updates within Segment Metadata". Signaling Updates within Segment Metadata [00246] To avoid frequent updates of the media presentation description in order to gain knowledge of potential updates, it is advantageous to signal any such updates along with the media segments. An additional element or elements may be provided within the media segments themselves which may indicate that updated metadata, such as the media presentation description, is available and needs to be accessed within a certain amount of time in order to successfully continue the creation of accessible segment lists. Additionally, such elements may provide a file identifier, such as a URL, or information that may be used to construct a file identifier, for the updated metadata file. The updated metadata file may include metadata equal to that provided in the original metadata file for the presentation, modified to indicate validity intervals, along with additional metadata also accompanied by validity intervals. Such an indication may be provided in the media segments of all representations available for a media presentation. A client accessing the block-request streaming system, upon detecting such an indication within a media block, may use the file download protocol or other means to retrieve the updated metadata file. The client is thereby provided with information about changes in the media presentation description and the time at which they will occur or have occurred.
Advantageously, each client requests the updated media presentation description only once when such changes take place, rather than "polling" and receiving the file many times for possible updates or changes. [00247] Examples of changes include the addition or removal of representations, changes to one or more representations, such as a change in bit rate, resolution, aspect ratio, included tracks or codec parameters, and changes to URL construction rules, for example a different origin server for an advertisement. Some changes may affect only the initialization segment, such as the Movie Header ("moov") atom associated with a representation, while other changes may affect the Media Presentation Description (MPD). [00248] In the case of on-demand content, these changes and their timing can be known in advance and can be signaled in the Media Presentation Description. [00249] For live content, the changes may not be known until the point at which they occur. One solution is to allow the Media Presentation Description that is available at a specific URL to be dynamically updated and to require clients to regularly request this MPD in order to detect changes. This solution has drawbacks in terms of scalability (origin server load and cache efficiency). In a scenario with large numbers of viewers, the caches may receive many requests for the MPD after the previous version has expired from the cache and before the new version has been received, and all of these may be forwarded to the origin server. The origin server may need to constantly process requests from the caches for each updated version of the MPD. Also, the updates may not be easily time-aligned with the changes in the media presentation. [00250] Since one of the advantages of HTTP streaming is the ability to use standard network infrastructure and services for scalability, a preferred solution may involve only "static" files (that is, files that can be cached) and not rely on clients "polling" files to see whether they have changed.
[00251] Solutions are discussed and proposed for resolving metadata updates, including the media presentation description and binary representation metadata such as "moov" atoms, in an adaptive HTTP streaming media presentation. [00252] For the case of live content, the points at which the MPD or "moov" may change might not be known when the MPD is constructed. Since frequent "polling" of the MPD to check for updates should generally be avoided, for bandwidth and scalability reasons, updates to the MPD can be indicated "in band" in the segment files themselves, i.e., each media segment may have the option of indicating updates. Depending on the segment formats (a) to (c) above, different updates may be signaled. [00253] Generally, the following indication may advantageously be provided in a signal within the segment: an indicator that the MPD may be updated before requesting the next segment within this representation or any upcoming segment that has a start time greater than the start time of the current segment. The update can be announced in advance, indicating that the update only needs to take place at some segment later than the next one. This MPD update can also be used to update binary representation metadata, such as Movie Headers, in case the media segment locator is changed. Another signal may indicate that, with the completion of this segment, no further segments advancing in time should be requested. [00254] In case the segments are formatted according to segment format (c), that is, each media segment may contain self-initializing metadata such as a movie header, then another signal may be added indicating that the subsequent segment contains an updated Movie Header (moov). This advantageously permits the movie header to be included in the segment, but the Movie Header need only be requested by the client if the previous segment indicates a Movie Header update, or in the case of seeking or random access when switching representations.
In other cases, the client can issue a byte range request for the segment that excludes the movie header from the download, thereby advantageously saving bandwidth. [00255] In another additional modality, if the MPD Update indication is flagged, then the sign can also contain a locator such as URL for the updated Media Presentation Description. The updated MPD can describe the presentation both before and after the update, using validity attributes such as new and old period in the case of discontinuous updates. This can be used advantageously to allow time change viewing as further described below, but also to advantageously allow the MPD update to be signaled anytime before the changes it contains begin. The customer can immediately download the new MPD and apply it to the ongoing presentation. [00256] In a specific embodiment, the flag of any changes to the media presentation description, the moov headers or the end of the presentation may be contained in a sequencing information box that is formatted following the rules of the segment format using the box structure of the ISO base media file format. This box can provide a specific signal for any of the different updates. Sequencing Information Box Definition Box type: "sinf" Container: None required: no quantity: zero or one. The Sequencing Information Box contains information about the sequence presentation of which the file is a part Syntax aligned(8) class StreamingInformationBox extends FullBox('sinf'){ unsigned int(32)streaming_information_flags; /// The following are optional fields string mpd_location } Semantics streaming_information_flags contains a logical OR equal to zero or more of the following: 0x00000001 Movie Header Update follows 0x00000002 Presentation Description Update 0x00000004 Presentation End mpd_location is present if and only if the Presentation Description Update indicators are configured and provides a Uniform Resource Locator for new Media Presentation Description. 
Illustrative Use Case for MPD Updates for Live Services [00257] Suppose a service provider wants to provide a live football event using the enhanced block-request streaming described herein. Perhaps millions of users may want to access the presentation of the event. The live event is sporadically interrupted by breaks when a timeout is called, or by other lulls in the action, during which advertisements may be added. Typically, there is little or no advance notice of the exact timing of the breaks. [00258] The service provider may need to provide redundant infrastructure (e.g., encoders and servers) to enable a seamless switch-over in case any of the components fail during the live event. [00259] Suppose a user, Ana, accesses the service on a bus with her mobile device, and the service is immediately available. Next to her sits another user, Paul, who watches the event on his laptop. A goal is scored and both celebrate this event at the same time. Paul tells Ana that the first goal in the game was even more exciting, and Ana uses the service so that she can watch the event as it was 30 minutes ago. After having seen the goal, she returns to the live event. [00260] To address this use case, the service provider must be able to update the MPD, signal to clients that an updated MPD is available, and allow clients to access the streaming service such that the data can be presented close to real time. [00261] Updating the MPD is feasible asynchronously to the delivery of segments, as explained elsewhere herein. The server can provide guarantees to the receiver that the MPD is not updated for some period of time. The server may rely on the current MPD. However, no explicit signaling is needed when the MPD is updated before some minimum update period. [00262] Fully synchronized play-out is hardly achievable, since clients may operate on different MPD update instances and therefore clients may have drift.
Using MPD updates, the server can communicate changes and the clients can be alerted to changes, even during a presentation. In-band signaling on a segment-by-segment basis can be used to indicate an update of the MPD; updates may thus be limited to segment boundaries, but this should be acceptable in most applications. [00263] An MPD element can be added that provides the publication time in clock time of the MPD, as well as an optional MPD update box that is added at the beginning of segments to signal that an MPD update is required. The updates can be performed hierarchically, as with MPDs. [00264] The MPD "publish time" provides a unique identifier for the MPD and for when the MPD was issued. It also provides an anchor for the update procedures. [00265] The MPD update box may be found in the segment after the "styp" box, and is defined by a Box Type = "mupe", requiring no container, not being mandatory, and having a quantity of zero or one. The MPD update box contains information about the media presentation of which the segment is a part. Illustrative syntax is as follows:
aligned(8) class MPDUpdateBox extends FullBox('mupe') {
    unsigned int(3) mpd_information_flags;
    unsigned int(1) new_location_flag;
    unsigned int(28) latest_mpd_update_time;
    /// The following are optional fields
    string mpd_location;
}
The semantics of the various fields of the MPDUpdateBox class are as follows:
mpd_information_flags: the logical OR of zero or more of the following:
0x00 Update Media Presentation Description now
0x01 Update Media Presentation Description later
0x02 End of presentation
0x03-0x07 reserved
new_location_flag: if set to 1, the new Media Presentation Description is available at a new location specified in mpd_location.
latest_mpd_update_time: specifies the time (in ms) by which the MPD update is required, relative to the MPD issue time of the latest MPD. The client may choose to update the MPD at any time between now and this time.
mpd_location: is present if and only if new_location_flag is set, and in that case mpd_location provides a Uniform Resource Locator for the new Media Presentation Description. [00266] If the bandwidth used by updates is a
Time-Shift Viewing and Network PVR [00267] When time-shift viewing is supported, it may happen that, for the lifetime of the session, two or more MPDs or Movie Headers are valid. In this case, by updating the MPD when necessary, but adding the validity mechanism or period concept, a valid MPD can exist across the entire time window. This means that the server can guarantee that any MPD and Movie Header are advertised for any period of time that falls within the valid time window for time-shift viewing. It is up to the client to ensure that its available MPD and metadata for its current presentation time are valid. Migration of a live session to a network PVR session, using only minor MPD updates, may also be supported. Special Media Segments [00268] One issue when the ISO/IEC 14496-12 file format is used within a block-request streaming system is that, as described above, it can be advantageous to store the media data for a single version of the presentation in multiple files, arranged in consecutive time segments. Additionally, it can be advantageous for each file to start with a Random Access Point. Additionally, it can be advantageous to choose the positions of the seek points during the video encoding process and to segment the presentation into multiple files, each starting with a seek point, based on the choice of seek points that was made during the encoding process, where each Random Access Point may or may not be placed at the beginning of a file, but where each file starts with a Random Access Point.
In an embodiment having the properties described above, the presentation metadata, or Media Presentation Description, may contain the exact duration of each file, where the duration is taken, for example, to mean the difference between the start time of the video media of one file and the start time of the video media of the next file. Based on this information in the presentation metadata, the client can construct a mapping between the global timeline for the media presentation and the local timeline for the media within each file. [00269] In another embodiment, the size of the presentation metadata can advantageously be reduced by instead specifying that every file or segment has the same duration. However, in this case, and where the media files are constructed according to the method above, the duration of each file may not be exactly equal to the duration specified in the media presentation description, because a Random Access Point may not exist at the point which is exactly the specified duration from the beginning of the file. [00270] A further embodiment of the invention to provide for correct operation of the block-request streaming system despite the discrepancy mentioned above is now described. In this method, an element may be provided within each file that specifies the mapping of the local media timeline within the file (by which is meant the timeline starting from timestamp zero, against which the decoding and composition timestamps of the media samples in the file are specified according to ISO/IEC 14496-12) to the global presentation timeline. This mapping information may comprise a single timestamp in global presentation time that corresponds to timestamp zero in the local file timeline.
The mapping information may alternatively comprise an offset value specifying the difference between the global presentation time corresponding to timestamp zero on the local file timeline and the global presentation time corresponding to the start of the file according to the information provided in the presentation metadata. [00271] Examples of such boxes could, for example, be the track fragment decode time box ('tfdt'), or the track fragment adjustment box ('tfad') together with the track fragment media adjustment box ('tfma').

Illustrative Client Including Segment List Generation

[00272] An illustrative client is now described. It can be used as a reference client for the server to ensure proper MPD generation and updates. [00273] An HTTP streaming client is driven by the information provided in the MPD. The client is considered to have access to the MPD that it received at time T, that is, the time at which it was able to successfully receive an MPD. Determining successful reception may include the client obtaining an updated MPD or the client verifying that the MPD has not been updated since the previous successful reception. [00274] The behavior of an illustrative client is now introduced. To provide a continuous streaming service to the user, the client first parses the MPD and creates a list of accessible segments for each representation for the local client time at the current system time, taking into account the segment list generation procedures as detailed below, possibly using playlists or URL construction rules. Then, the client selects one or several representations based on the information in the representation attributes and other information, for example, the available bandwidth and client capabilities. Depending on the grouping, representations can be presented alone or together with other representations.
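The timeline mapping described in [00268]-[00270] can be sketched in code. This is a minimal illustration, not the patent's implementation: given the exact per-file durations carried in the presentation metadata, the client derives, for each file, the global presentation time corresponding to local timestamp zero. All function names are illustrative assumptions.

```python
def build_global_offsets(file_durations):
    """For each file, return the global presentation time that corresponds
    to timestamp zero on that file's local timeline (cumulative durations)."""
    offsets = []
    start = 0.0
    for duration in file_durations:
        offsets.append(start)
        start += duration
    return offsets

def local_to_global(file_index, local_ts, offsets):
    """Map a timestamp on a file's local timeline to the global timeline."""
    return offsets[file_index] + local_ts

# Example: three files of 10.0 s, 9.96 s and 10.04 s of video media.
offsets = build_global_offsets([10.0, 9.96, 10.04])
# Local time 2.5 s in the third file maps to global time 19.96 + 2.5 s.
global_ts = local_to_global(2, 2.5, offsets)
```

An offset value as in [00270] would simply be the difference between such a computed global time and the file start time stated in the metadata.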
[00275] For each representation, the client acquires the binary metadata, such as the 'moov' header for the representation, if present, and the media segments of the selected representations. The client accesses the media content by requesting segments or segment byte ranges, possibly using the segment list. The client may initially buffer media before starting the presentation and, once the presentation has started, the client continues to consume the media content by continuously requesting segments or parts of segments, taking into account the MPD update procedures. [00276] The client can switch representations taking into account updated MPD information and/or updated information from its environment, for example, a change in the available bandwidth. With any request for a media segment containing a random access point, the client can switch to a different representation. When moving forward, that is, as the current system time (referred to as "NOW time" to represent time relative to the presentation) advances, the client consumes the accessible segments. With each advance in NOW time, the client possibly expands the list of accessible segments for each representation according to the procedures specified here. [00277] If the end of the media presentation has not yet been reached, and if the current playback time is within a threshold at which the client anticipates running out of the media described in the MPD for any consumed or to-be-consumed representation, then the client may request an update of the MPD, with a new reception time T. Once received, the client then takes into account the possibly updated MPD and the new time T in generating the accessible segment lists. Figure 29 illustrates a procedure for live services at different times on the client.

Accessible Segment List Generation

[00278] It is assumed that the HTTP streaming client has access to an MPD and may wish to generate an accessible segment list for a NOW clock time.
The client is synchronized to a global time reference with a certain precision, but advantageously no direct synchronization with the HTTP streaming server is needed. [00279] The accessible segment list for each representation is preferably defined as a list of pairs of a segment start time and a segment locator, where the segment start time can be defined as being relative to the start of the representation without loss of generality. The start of the representation can be aligned with the start of a period, if that concept is applied. Otherwise, the start of the representation may be at the start of the media presentation. [00280] The client uses the URL construction and timing rules, for example, as further defined here. Once a list of described segments is obtained, that list is further restricted to the accessible ones, which can be a subset of the segments of the complete media presentation. The construction is governed by the current value of the clock at the client, the NOW time. Generally, segments are only available for NOW times within a set of availability times. For NOW times outside this window, no segments are available. Additionally, for live services, the check time is considered to indicate how far the media time axis documented by the MPD extends. [00281] When the client's playback time reaches the check time, it advantageously requests a new MPD. [00282] Then, the segment list is further restricted by the check time together with the MPD attribute TimeShiftBufferDepth, so that the only available media segments are those for which the sum of the media segment start time and the representation start time lies in the interval between NOW minus TimeShiftBufferDepth minus the duration of the last described segment, and the smaller of the check time or NOW.
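The availability window of [00282] can be expressed as a short computation. The following is a sketch under stated assumptions (all names are illustrative; the text does not prescribe an implementation): segment start times are relative to the representation start, durations are in seconds, and the last described segment's duration widens the lower bound as in the text.

```python
def accessible_segments(segment_starts, durations, rep_start,
                        now, check_time, time_shift_buffer_depth):
    """Return indices of segments for which rep_start + segment_start lies
    between NOW - TimeShiftBufferDepth - duration of the last described
    segment, and the smaller of the check time or NOW ([00282])."""
    upper = min(check_time, now)
    lower = now - time_shift_buffer_depth - durations[-1]
    return [i for i, s in enumerate(segment_starts)
            if lower <= rep_start + s <= upper]

# Example: six 10 s segments, representation starting at global time 0,
# NOW = 45 s, check time = 40 s, 20 s time shift buffer depth.
avail = accessible_segments([0, 10, 20, 30, 40, 50], [10] * 6,
                            0, 45, 40, 20)
# Window is [45 - 20 - 10, min(40, 45)] = [15, 40]: segments 2, 3 and 4.
```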
Scalable Blocks

[00283] Sometimes the available bandwidth is so low that the block or blocks currently being received at a receiver are unlikely to be received completely in time to be played out without pausing the presentation. The receiver can detect such situations in advance. For example, the receiver can determine that it is receiving blocks encoding 5 units of media per 6 units of time, and has a buffer of 4 units of media, so the receiver can expect to have to stall, that is, pause the presentation, about 24 time units later. With sufficient notice, the receiver can react to such a situation, for example, by abandoning the current stream of blocks and starting to request a block or blocks from a different representation of the content, such as one that uses less bandwidth per unit of playback time. For example, if the receiver switched to a representation in which the blocks encode at least 20% more video time for the same block size, the receiver might be able to avoid the need to stall until the bandwidth situation improves. [00284] However, it can be wasteful to have the receiver completely discard the data already received from the abandoned representation. In an embodiment of the block streaming system described here, the data within each block can be encoded and arranged in such a way that certain prefixes of the data within the block can be used to continue the presentation without the remainder of the block having been received. For example, well-known scalable video coding techniques can be used. Examples of such video coding methods include H.264 Scalable Video Coding (SVC) or the temporal scalability of H.264 Advanced Video Coding (AVC). Advantageously, this method allows the presentation to continue based on the portion of a block that was received, even when reception of a block or blocks is abandoned, for example, due to changes in the available bandwidth.
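The stall prediction arithmetic of [00283] follows directly from the buffer size and the net drain rate. A minimal sketch (the function name and interface are assumptions, not from the text):

```python
def time_until_stall(buffer_media, receive_rate, play_rate=1.0):
    """Predict the real time until the buffer empties. receive_rate is
    media time received per unit of real time; play_rate is media time
    consumed per unit of real time. Returns None if the buffer is not
    draining (no stall predicted)."""
    drain = play_rate - receive_rate
    if drain <= 0:
        return None  # buffer stable or growing
    return buffer_media / drain

# Example from the text: 5 units of media received per 6 units of time,
# with a 4-unit buffer, gives 4 / (1 - 5/6) = 24 time units until a stall.
t = time_until_stall(4.0, 5.0 / 6.0)
```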
Another advantage is that a single data file can be used as the source for multiple different representations of the content. This is possible, for example, by using partial HTTP GET requests to select the subset of a block corresponding to the required representation. [00285] One improvement detailed here is an enhanced segment map, the scalable segment map. The scalable segment map contains the locations of the different layers in the segment, so that the client can access the corresponding parts of the segment and extract the layers. In another embodiment, the media data in the segment is ordered so that the quality of the segment increases gradually as data is downloaded from the beginning of the segment. In another embodiment, a gradual increase in quality is applied to each block or fragment contained in the segment, so that fragment requests can be made to support the scalable approach. [00286] Figure 12 is a figure illustrating an aspect of scalable blocks. In that figure, a transmitter 1200 sends metadata 1202, scalable layer 1 (1204), scalable layer 2 (1206), and scalable layer 3 (1208), with the latter being delayed. A receiver 1210 can then use the metadata 1202, scalable layer 1 (1204) and scalable layer 2 (1206) to present the media presentation 1212.

Independent Scalability Layers

[00287] As explained above, it is undesirable for a block-request streaming system to have to stall when the receiver is unable to receive the requested blocks of a specific representation of the media data in time for playback, as this often creates a bad user experience. Stalls can be avoided, reduced or mitigated by restricting the data rate of the chosen representations to be much less than the available bandwidth, so that it becomes very unlikely that any given part of the presentation will not be received in time, but this strategy has the disadvantage that the media quality is necessarily much lower than what the available bandwidth could in principle support.
A presentation of lower quality than is possible can also be interpreted as a poor user experience. Thus, the designer of a block-request streaming system faces a choice in the design of the client procedures, client programming or hardware configuration: either request a version of the content that has a data rate much lower than the available bandwidth, in which case the user may suffer poor media quality, or request a version of the content that has a data rate close to the available bandwidth, in which case the user may suffer a high probability of pauses during the presentation as the available bandwidth changes. [00288] To handle such situations, the block streaming systems described here can be configured to handle multiple layers of scalability independently, so that a receiver can make layered requests and a transmitter can respond to layered requests. [00289] In such embodiments, the encoded media data for each block can be divided into multiple separate pieces, referred to herein as "layers", so that a combination of layers comprises all the media data for a block, and such that a client that has received certain subsets of the layers can perform the decoding and presentation of a representation of the content. In this approach, the ordering of the data in the stream is such that contiguous ranges are of increasing quality, and the metadata reflects this. [00290] An example of a technique that can be used to generate layers with the above property is the SVC technique, for example, as described in the ITU-T H.264/SVC standards. Another example of a technique that can be used to generate layers with the above property is the temporal scalability layers technique provided in the ITU-T H.264/AVC standard.
[00291] In these embodiments, metadata can be provided in the MPD or in the segment itself that allows the construction of requests for individual layers of any given block, and/or combinations of layers, and/or a given layer of multiple blocks, and/or a combination of layers from multiple blocks. For example, the layers comprising a block can be stored within a single file, and metadata can be provided specifying the byte ranges within the file corresponding to the individual layers. [00292] A file download protocol capable of specifying byte ranges, e.g., HTTP 1.1, can be used to request individual layers or multiple layers. Additionally, as will be clear to those skilled in the art upon review of this description, the techniques described above pertaining to building, ordering and downloading variable-size blocks and variable combinations of blocks can be applied in this context as well.

Combinations

[00293] A number of embodiments are now described that can be advantageously employed by a block-request streaming client in order to achieve an improvement in the user experience and/or a reduction in server infrastructure capacity requirements compared to existing techniques, by using layered media data as described above. [00294] In a first embodiment, the known techniques of a block-request streaming system can be applied with the modification that different versions of the content are in some cases replaced by different combinations of layers.
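The layer requests of [00291]-[00292] can be sketched as the construction of an HTTP/1.1 Range header from per-layer byte ranges carried in the metadata. The layout below is hypothetical (the text does not give concrete byte ranges); coalescing adjacent ranges into one span is an implementation choice, not mandated by the text.

```python
def layer_range_header(layer_ranges, wanted_layers):
    """Build an HTTP/1.1 Range header value requesting the byte ranges of
    the selected layers, coalescing contiguous ranges into one span.
    layer_ranges maps layer index -> (first_byte, last_byte), inclusive."""
    spans = sorted(layer_ranges[l] for l in wanted_layers)
    merged = []
    for first, last in spans:
        if merged and first == merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], last)  # contiguous: extend span
        else:
            merged.append((first, last))
    return "bytes=" + ",".join(f"{a}-{b}" for a, b in merged)

# Hypothetical segment layout: base layer then two enhancement layers.
ranges = {0: (0, 49999), 1: (50000, 79999), 2: (80000, 99999)}
base_plus_one = layer_range_header(ranges, [0, 1])
base_plus_top = layer_range_header(ranges, [0, 2])
```

A client would send the resulting value in a partial GET request for the segment URL given by the MPD.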
That is, where an existing system can provide two distinct representations of the content, the enhanced system described here can provide two layers, where one representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the first layer in the enhanced system, and the second representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the combination of the two layers. As a result, the storage capacity needed within the enhanced system is reduced compared to that needed in the existing system. Additionally, while clients of the existing system can issue requests for blocks of one representation or the other, clients of the enhanced system can issue requests for the first layer or for both layers of a block. As a result, the user experience on the two systems is similar. Additionally, improved caching is obtained, since common segments are used even when different qualities are consumed, and these segments are then more likely to be cached. [00295] In a second embodiment, a client in an improved block-request streaming system employing the layer method described can maintain a separate data store for each of the various layers of the media encoding. As will be clear to those skilled in the art of data management within client devices, these "separate" stores can be implemented by allocating physically or logically separate memory regions to the separate stores, or by other techniques in which the stored data is kept in a single region or multiple regions of memory and the separation of the data from the different layers is achieved logically through the use of data structures containing references to the storage locations of the data of the separate layers. Hereinafter, the term "separate stores" should be understood to include any method in which the data of the distinct layers can be separately identified.
The client issues requests for individual layers of each block based on the occupancy of each store. For example, the layers can be ordered in a priority order such that a request for data from one layer may not be issued if the occupancy of any store for a lower layer in the priority order is below a threshold for that lower layer. In this method, priority is given to receiving data for the lower layers in the priority order, so that if the available bandwidth is below what is needed to also receive the higher layers in the priority order, then only the lower layers are requested. Additionally, the thresholds associated with the different layers can be different, so that, for example, the lower layers have higher thresholds. In case the available bandwidth changes so that the data for a higher layer cannot be received before the playback time of the block, the data for the lower layers will necessarily have already been received and hence the presentation can continue with the lower layers only. Thresholds for store occupancy can be defined in terms of bytes of data, playback duration of the data contained in the store, number of blocks, or any other suitable measure. [00296] In a third embodiment, the methods of the first and second embodiments can be combined, so that multiple media representations are provided, each comprising a subset of the layers (as in the first embodiment), and the second embodiment is applied to a subset of layers within a representation. [00297] In a fourth embodiment, the methods of the first, second and/or third embodiments can be combined with the embodiment in which multiple independent representations of the content are provided, such that, for example, at least one of the independent representations comprises multiple layers to which the techniques of the first, second and/or third embodiments are applied.

Advanced Store Manager

[00298] In combination with the store monitor 126 (see Figure 2), an advanced store manager can be used to optimize a client-side store.
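The priority rule of [00295] can be sketched as follows: a layer may be requested only if every lower (higher-priority) layer is at or above its occupancy threshold; otherwise the deficient lower layer is requested first. This is a minimal sketch under assumptions (occupancy measured in seconds of playback; names are illustrative), not the patent's implementation.

```python
def next_layer_to_request(occupancy, thresholds):
    """Layers indexed in priority order (0 = base, highest priority).
    occupancy[i] and thresholds[i] are in seconds of buffered playback.
    Return the first layer whose store is below its threshold; if all
    thresholds are met, top up the lowest-priority layer."""
    for layer, (occ, thr) in enumerate(zip(occupancy, thresholds)):
        if occ < thr:
            return layer
    return len(occupancy) - 1

# Lower layers carry higher thresholds, as suggested in the text.
choice = next_layer_to_request([12.0, 3.0, 8.0], [10.0, 5.0, 2.0])
# Base layer is healthy (12 >= 10) but layer 1 is below 5 s: request layer 1.
```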
Block-request streaming systems need to ensure that media playback can start quickly and continue smoothly, while simultaneously providing the maximum media quality to the user or destination device. This may require the client to request the blocks that have the highest media quality but that can also be started quickly and received in time to be played out without forcing a pause in the presentation. [00299] In the embodiments that use the advanced store manager, the manager determines which blocks of media data to request and when to make those requests. An advanced store manager can, for example, be provided with a set of metadata for the content to be presented, this metadata including a list of the representations available for the content and metadata for each representation. The metadata for a representation can comprise information about the data rate of the representation and other parameters, such as the video, audio and other codecs and codec parameters, the video resolution, the decoding complexity, the audio language and any other parameters that may affect the choice of representation at the client. [00300] The metadata for a representation can also comprise identifiers for the blocks into which the representation has been segmented, these identifiers providing the information necessary for the client to request a block. For example, where the request protocol is HTTP, the identifier could be an HTTP URL, possibly together with additional information identifying a byte range or time range within the file identified by the URL, that byte range or time range identifying the block within the file identified by the URL. [00301] In a specific implementation, the advanced store manager determines when a receiver makes a request for new blocks and can handle the sending of the requests. In a novel aspect, the advanced store manager makes requests for new blocks according to the value of a balance ratio that balances between using too much bandwidth and running out of media during a streaming playback.
[00302] The information received by the store monitor 126 from the block store 125 may include indications of each event in which media data is received, how much was received, when playback of the media data started or stopped, and the speed of media playback. Based on this information, the store monitor 126 can calculate a variable representing the current store size, Bcurrent. In these examples, Bcurrent represents the amount of media contained in the store or stores of the client or other device, and can be measured in units of time, so that Bcurrent represents the amount of time it would take to play out all the media represented by the blocks or partial blocks held in the store or stores if no additional blocks or partial blocks were received. Thus, Bcurrent represents the "playback duration", at normal playback speed, of the media data available at the client but not yet played. [00303] As time passes, the value of Bcurrent will decrease as media is played and may increase each time new data for a block is received. Note that, for the purposes of this explanation, a block is considered received when all the data of that block is available at the block requester 124, but other measures can be used instead, for example, taking into account the reception of partial blocks. In practice, reception of a block can take place over a period of time. [00304] Figure 13 illustrates the variation of the value of Bcurrent over time, as media is played and blocks are received. As illustrated in Figure 13, the value of Bcurrent is equal to zero for times earlier than t0, indicating that no data has been received. At t0, the first block is received and the value of Bcurrent increases to equal the playback duration of the received block. At this point, playback has not yet started and the value of Bcurrent remains constant until time t1, when a second block arrives and Bcurrent increases by the size of that second block.
At that point, playback starts and the value of Bcurrent starts decreasing linearly until time t2, when a third block arrives. [00305] The progression of Bcurrent continues in this "sawtooth" form, increasing each time a block is received (at times t2, t3, t4, t5 and t6) and decreasing smoothly as data is played back. Note that in this example playback proceeds at the normal playback rate for the content, and thus the slope of the curve between block receptions is exactly -1, meaning that one second of media data is played for every second of real time that passes. With frame-based media played at a given number of frames per second, e.g., 24 frames per second, the slope of -1 will be approximated by small step functions indicating the playback of each individual frame of data, e.g., steps of -1/24 of a second as each frame is played. [00306] Figure 14 illustrates another example of the evolution of Bcurrent over time. In that example, the first block arrives at t0 and playback starts immediately. Block arrival and playback continue until time t3, when the value of Bcurrent reaches zero. When this happens, no additional media data is available for playback, forcing the media presentation to pause. At time t4, a fourth block is received and playback can resume. This example, therefore, illustrates a case in which the reception of the fourth block was later than desired, resulting in a pause in playback and thus a poor user experience. Thus, one goal of the advanced store manager and other features is to reduce the probability of this event while simultaneously maintaining high media quality. [00307] The store monitor 126 can also calculate another metric, Bratio(t), which is the ratio of the media received in a given period of time to the length of that period. More specifically, Bratio(t) is equal to Treceived/(Tnow-t), where Treceived is the amount of media (measured by its playback time) received in the period from time t, some time before the current time, up to the current time, Tnow.
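The two store monitor quantities can be written out directly from their definitions in [00302] and [00307]. A minimal sketch (function names are assumptions):

```python
def b_current(received_play_time, played_time):
    """Playback duration of media received but not yet played ([00302]),
    both arguments measured in seconds of media time."""
    return received_play_time - played_time

def b_ratio(t_received, t, t_now):
    """Bratio(t) = Treceived / (Tnow - t) ([00307]): media play time
    received in (t, Tnow] divided by the elapsed real time. Values > 1
    mean the store is growing; values < 1 mean it is draining."""
    return t_received / (t_now - t)

# 30 s of media received, 22 s already played: Bcurrent = 8 s.
bc = b_current(30.0, 22.0)
# 12 s of media received over the last 10 s of real time: Bratio = 1.2.
br = b_ratio(12.0, 100.0, 110.0)
```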
[00308] Bratio(t) can be used to measure the rate of change of Bcurrent. Bratio(t)=0 is the case in which no data has been received since time t; Bcurrent will have decreased by (Tnow-t) since then, assuming media is being played. Bratio(t)=1 is the case in which media is received in the same amount as it is being played, over the period (Tnow-t); Bcurrent will have the same value at time Tnow as at time t. Bratio(t)>1 is the case in which more data was received than was needed for playback over the period (Tnow-t); Bcurrent will have increased from time t to time Tnow. [00309] The store monitor 126 additionally computes a State value, which can take on a discrete number of values. The store monitor 126 is additionally equipped with a function, NewState(Bcurrent, Bratio), which, based on the current value of Bcurrent and the values of Bratio for t<Tnow, provides a new State value as output. Whenever Bcurrent and Bratio cause this function to return a value different from the current State value, the new value is assigned to State and that new State value is indicated to the block selector 123. [00310] The NewState function can be evaluated with reference to the space of all possible values of the pair (Bcurrent, Bratio(Tnow-Tx)), where Tx can be a fixed (configured) value, or can be derived from Bcurrent, for example, by a configuration table that maps from Bcurrent values to Tx values, or it can depend on the previous value of State. The store monitor 126 is supplied with one or more divisions of this space, where each division comprises sets of disjoint regions, each region being annotated with a State value. The evaluation of the NewState function then comprises the operation of identifying a division and determining the region in which the pair (Bcurrent, Bratio(Tnow-Tx)) lies. The return value is then the annotation associated with that region. In a simple case, only one division is provided.
In a more complex case, the division may depend on the pair (Bcurrent, Bratio(Tnow-Tx)) at the previous time of evaluation of the NewState function, or on other factors. [00311] In a specific embodiment, the division described above can be based on a configuration table containing a number of threshold values for Bcurrent and a number of threshold values for Bratio. Specifically, let the threshold values for Bcurrent be Bthresh(0)=0, Bthresh(1), ..., Bthresh(n1), Bthresh(n1+1)=∞, where n1 is the number of non-zero threshold values for Bcurrent, and let the threshold values for Bratio be Br-thresh(0)=0, Br-thresh(1), ..., Br-thresh(n2), Br-thresh(n2+1)=∞, where n2 is the number of non-zero threshold values for Bratio. These threshold values define a division comprising a grid of (n1+1) by (n2+1) cells, where the cell in column i and row j corresponds to the region in which Bthresh(i-1) <= Bcurrent < Bthresh(i) and Br-thresh(j-1) <= Bratio < Br-thresh(j). Each cell in the grid described above is annotated with a State value, for example by being associated with particular values stored in memory, and the NewState function then returns the State value associated with the cell indicated by the values Bcurrent and Bratio(Tnow-Tx).
For each Bratio threshold value that is less than the Bratio range corresponding to the cell chosen in the last NewState evaluation, the threshold value is reduced by subtracting the hysteresis value associated with that threshold. For each Bratio threshold value that is greater than the Bratio range corresponding to the cell chosen in the last NewState evaluation, the threshold value is increased by adding the hysteresis value associated with that threshold. The modified threshold values are used to evaluate the value of NewState, and then the threshold values are returned to their original values. [00313] Other ways of defining divisions of the space will be obvious to those skilled in the art upon reading this description. For example, a division can be defined by using inequalities based on linear combinations of Bratio and Bcurrent, e.g., linear inequality bounds of the form α1*Bratio + α2*Bcurrent ≥ α0 for real values α0, α1 and α2, to define half-spaces within the overall space, and defining each separate region as the intersection of a number of such half-spaces.
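The grid lookup of [00311] amounts to locating the cell containing the pair (Bcurrent, Bratio). A minimal sketch under stated assumptions: the implicit endpoints 0 and infinity are left out of the threshold lists, the example thresholds and grid annotations are invented for illustration (they are not the values of Figure 15), and hysteresis is omitted for brevity.

```python
import bisect

def new_state(b_current, b_ratio, b_thresh, br_thresh, grid):
    """Grid lookup of [00311]: b_thresh and br_thresh are increasing
    threshold lists (without the implicit 0 and infinity endpoints);
    grid[i][j] is the State annotation of the cell where
    b_thresh[i-1] <= Bcurrent < b_thresh[i] and likewise for Bratio."""
    i = bisect.bisect_right(b_thresh, b_current)
    j = bisect.bisect_right(br_thresh, b_ratio)
    return grid[i][j]

# Hypothetical 3x3 division annotated 'L'ow, 'S'table, 'F'ull.
B = [4000, 12000]          # Bcurrent thresholds, in milliseconds
BR = [0.8, 1.2]            # Bratio thresholds
GRID = [["L", "L", "S"],
        ["L", "S", "F"],
        ["S", "F", "F"]]
state = new_state(6000, 1.5, B, BR, GRID)
```

The hysteresis of [00312] would be applied by shifting each threshold toward or away from the previously chosen cell before the two `bisect` calls.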
An example of a suitable set of threshold values and the resulting grid of cells is illustrated in Figure 15. [00316] In Figure 15, the Bcurrent thresholds are shown on the horizontal axis in milliseconds, with the hysteresis values shown below as "+/- value". The Bratio thresholds are shown on the vertical axis in per mille (that is, multiplied by 1000), with the hysteresis values shown below as "+/- value". The State values are annotated in the grid cells as "L", "S" and "F" for "Low", "Stable" and "Full", respectively. [00317] The block selector 123 receives notifications from the block requester 124 whenever there is an opportunity to request a new block. As described above, the block selector 123 is provided with information as to the plurality of available blocks and metadata for those blocks, including, for example, information about the media data rate of each block. [00318] The information about the media data rate of a block can comprise the actual media data rate of the specific block (that is, the block size in bytes divided by the playback time in seconds), the media data rate of the representation to which the block belongs, a measure of the available bandwidth required, on a sustained basis, to play out the representation to which the block belongs without pauses, or a combination of the above. [00319] The block selector 123 selects blocks based on the State value last indicated by the store monitor 126. When this State value is "Stable", the block selector 123 selects a block from the same representation as the previously selected block. The selected block is the first block (in playback order) containing media data for a period of time in the presentation for which no media data was previously requested. [00320] When the State value is "Low", the block selector 123 selects a block from a representation with a lower media data rate than that of the previously selected block. A number of factors can influence the exact choice of representation in this case.
For example, the block selector 123 can be provided with an indication of the aggregate rate of the incoming data and can choose a representation with a media data rate that is less than that value. [00321] When the State value is "Full", the block selector 123 selects a block from a representation with a media data rate higher than that of the previously selected block. A number of factors can influence the exact choice of representation in this case. For example, the block selector 123 can be provided with an indication of the aggregate rate of the incoming data and can choose a representation with a media data rate that is not greater than that value. [00322] Several additional factors can further influence the operation of the block selector 123. In particular, the frequency with which the media data rate of the selected blocks is increased may be limited, even if the store monitor 126 continues to indicate the "Full" state. Additionally, it is possible for the block selector 123 to receive a "Full" state indication when there is no block with a higher media data rate available (for example, because the last selected block was already at the highest available media data rate). In that case, the block selector 123 can delay the selection of the next block by a chosen time, so that the overall amount of media data held in the block store 125 is bounded above. [00323] Additional factors can influence the set of blocks considered during the selection process. For example, the available blocks may be limited to those of representations whose encoding resolution is within a specific range provided to the block selector 123. [00324] The block selector 123 can also receive inputs from other components that monitor other aspects of the system, such as the availability of computing resources for media decoding.
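The state-driven choice of [00319]-[00322] can be sketched as a small selection function over representations sorted by media data rate. This is an illustrative simplification (single-step moves, invented rates), not the full selector, which also weighs the aggregate incoming data rate, rate-increase frequency limits and resolution constraints described in the text.

```python
def select_representation(state, rates, current):
    """Pick an index into `rates`, a list of representation media data
    rates sorted ascending: stay put on "Stable", step down on "Low",
    step up on "Full" if a higher-rate representation exists."""
    if state == "Low":
        return max(current - 1, 0)
    if state == "Full":
        return min(current + 1, len(rates) - 1)
    return current  # "Stable"

rates = [300, 700, 1500, 3000]   # kbit/s, hypothetical
up = select_representation("Full", rates, 1)
down = select_representation("Low", rates, 0)   # already at the lowest
```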
If such resources become scarce, the block selector 123 can choose blocks whose decoding is indicated within the metadata to be of lower computational complexity (for example, representations with lower resolution or frame rate are generally of lower decoding complexity). [00325] The embodiment described above brings a substantial advantage in that the use of the Bratio value in the evaluation of the NewState function within the store monitor 126 allows a faster increase in quality at the beginning of the presentation compared to a method that considers only Bcurrent. Without considering Bratio, a large amount of stored data may have to accumulate before the system is able to select blocks with a higher media data rate and thus higher quality. However, when the Bratio value is large, this indicates that the available bandwidth is much higher than the media data rate of the previously received blocks and that, even with relatively little stored data (i.e., a low value of Bcurrent), it remains safe to request blocks of higher media data rate and thus higher quality. Likewise, if the Bratio value is low (for example, <1), this indicates that the available bandwidth has dropped below the media data rate of the previously requested blocks, so that even if Bcurrent is high the system will switch to a lower media data rate and thus lower quality, for example, to avoid reaching the point where Bcurrent=0 and media playback stalls. This improved behavior can be especially important in environments where network conditions, and therefore delivery speeds, can vary quickly and dynamically, for example, for users streaming to mobile devices. [00326] Another advantage is conferred by the use of configuration data to specify the division of the value space of (Bcurrent, Bratio). Such configuration data can be provided to the store monitor 126 as part of the presentation metadata or by other dynamic means.
Since in practical deployments the behavior of users' network connections can be highly variable between users and over time for a single user, it can be difficult to predict partitions that will work well for all users. The possibility of providing such configuration information to users dynamically allows good configurations to be developed over time according to accumulated experience.

Variable Request Sizing

[00327] A high frequency of requests may be required if each request is for a single block and each block encodes a short media segment. If the media blocks are short, video playback moves from block to block quickly, which provides more frequent opportunities for the receiver to adjust or change its selected data rate by changing the representation, improving the probability that playback can continue without interruption. However, a disadvantage of a high frequency of requests is that it may not be sustainable on certain networks where the available bandwidth on the client-to-server path is restricted, for example, on wireless WANs such as 3G and 4G networks, where the capacity of the client's data link to the network is limited or may become limited for short or long periods of time due to changing radio conditions. [00328] A high frequency of requests also implies a high load on the server infrastructure, which brings associated costs in terms of capacity requirements. Thus, it would be desirable to have some of the benefits of a high frequency of requests without all of its drawbacks. [00329] In some embodiments of a block streaming system, the flexibility of a high request frequency is combined with less frequent requests. In these embodiments, blocks can be constructed as described above and aggregated into segments containing multiple blocks, also as described above.
At the beginning of the presentation, the processes described above, in which each request references a single block, or in which multiple simultaneous requests are issued for parts of a block, can be applied to ensure a fast channel zapping time and therefore a good user experience at the start of the presentation. Subsequently, when a certain condition, to be described below, is met, the client can issue requests that span multiple blocks in a single request. This is possible because the blocks have been aggregated into larger files or segments and can be requested using time or byte ranges. Consecutive byte or time ranges can be aggregated into a single larger byte or time range in a single request for multiple blocks, and even discontiguous blocks can be requested in one request. [00330] A basic consideration that can drive the decision of whether to request a single block (or a partial block) or to request multiple consecutive blocks is to base the decision on whether or not the requested blocks are likely to be played out. For example, if it is likely that a switch to another representation will be needed soon, the client is better off requesting single blocks, i.e., small amounts of media data. One reason for this is that, if a multi-block request is made when a switch to another representation might be imminent, the switch may take place before the last blocks of the request are played. In this case, downloading these last blocks can delay the delivery of the media data of the representation to which the switch is made, which can cause media playback interruptions. [00331] However, requests for single blocks result in a higher frequency of requests.
On the other hand, if it is unlikely that a switch to another representation will be needed soon, then it may be preferable to make requests for multiple blocks, since all of these blocks are likely to be played out, and this results in a lower request frequency, which can substantially reduce the request overhead, especially when it is typical for no change in representation to be imminent. [00332] In conventional block aggregation systems, the quantity requested in each request is not dynamically adjusted; that is, typically, each request is for an entire file, or each request is for approximately the same amount of a representation's file (sometimes measured in time, sometimes in bytes). Thus, if all requests are small then the request overhead is high, whereas if all requests are large then this increases the chances of media stall events and/or of providing lower-quality media playback if lower-quality representations are chosen to avoid having to change representations quickly as network conditions vary. [00333] An example of a condition that, when met, can cause subsequent requests to reference multiple blocks is a threshold on the store size, Bcurrent. If Bcurrent is below the threshold, then each request issued references a single block. If Bcurrent is greater than or equal to the threshold, then each request issued references multiple blocks. If a request referencing multiple blocks is issued, then the number of blocks requested in each single request can be determined in one of several ways. For example, the number can be constant, for example two. Alternatively, the number of blocks requested in a single request may depend on the state of the store, and in particular on Bcurrent. For example, multiple thresholds can be set, with the number of blocks requested in a single request being derived from the highest of the multiple thresholds that is less than Bcurrent.
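The multiple-threshold rule just described can be sketched as follows. The threshold table used here is an assumed example for illustration, not taken from this description:

```python
# Illustrative sketch of the multiple-threshold rule: the number of blocks
# referenced by a single request is derived from the highest threshold
# that is less than Bcurrent. The threshold table is an assumed example.

# (threshold in seconds of buffered media, blocks per request)
THRESHOLDS = [(0.0, 1), (5.0, 2), (10.0, 4), (20.0, 8)]

def blocks_per_request(b_current: float) -> int:
    """Return how many consecutive blocks the next request should cover."""
    n = 1
    for threshold, count in THRESHOLDS:
        if threshold < b_current:  # highest threshold below Bcurrent wins
            n = count
    return n
```

With this table, a nearly empty buffer yields single-block requests (frequent rate-adjustment opportunities), while a well-filled buffer yields multi-block requests (low request overhead), matching the trade-off described above.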
[00334] Another example of a condition that, when met, can cause requests to reference multiple blocks is the State variable value described above. For example, when State is "Stable" or "Full" then requests can be issued for multiple blocks, but when State is "Low" then all requests can be for one block. [00335] Another embodiment is illustrated in Figure 16. In this embodiment, when the next request is to be issued (determined in step 1300), the current values of State and Bcurrent are used to determine the size of the next request. If the current value of State is "Low", or the current value of State is "Full" and the current representation is not the highest available (determined in step 1310, answer being "Yes"), then the next request is chosen to be short, for example, only for the next block (block determined and request made in step 1320). The rationale behind this is that these are conditions under which a switch between representations is likely to occur very soon. If the current value of State is "Stable", or the current value of State is "Full" and the current representation is the highest available (determined in step 1310, answer being "No"), then the duration of the consecutive blocks requested in the next request is chosen to be proportional to a fraction α of Bcurrent for some fixed α<1 (blocks determined in step 1330, request made in step 1340); for example, with α=0.4, if Bcurrent=5 seconds then the next request might be for approximately 2 seconds of blocks, whereas if Bcurrent=10 seconds then the next request might be for approximately 4 seconds of blocks. One rationale for this is that, under these conditions, it may be unlikely that a switch to a new representation will be made for an amount of time that is proportional to Bcurrent.

Flexible Pipelining

[00336] Block streaming systems can use a file request protocol that has a particular underlying transport protocol, for example TCP/IP.
At the beginning of a TCP/IP or other transport protocol connection, it can take considerable time to achieve utilization of the full available bandwidth. This can result in a "connection initialization penalty" each time a new connection is started. For example, in the case of TCP/IP, the connection initialization penalty is due to the time it takes for the initial TCP handshake to establish the connection and the time it takes for the congestion control protocol to reach full utilization of the available bandwidth. [00337] In this case, it may be desirable to issue multiple requests using a single connection, in order to reduce the frequency with which the connection initialization penalty is incurred. However, some file transport protocols, for example HTTP, do not provide a mechanism to cancel a request other than closing the transport layer connection, thus incurring a connection initialization penalty when a new connection is established in place of the old one. An issued request may need to be canceled if it is determined that the available bandwidth has changed and a different media data rate is required, that is, when there is a decision to switch to a different representation. Another reason for canceling an issued request could be that the user has requested that the media presentation be ended and a new presentation be started (perhaps from the same content item at a different point in the presentation, or perhaps from a new content item). [00338] As is known, the connection initialization penalty can be avoided by keeping the connection open and reusing the same connection for subsequent requests, and, as is also known, the connection can be kept fully utilized if multiple requests are issued at the same time on the same connection (a technique known as "pipelining" in the context of HTTP).
However, a disadvantage of issuing multiple requests at the same time, or more generally of issuing multiple requests before previous requests have completed on a connection, is that the connection is then committed to transporting the responses to those requests, and thus, if changes to the requests that should be issued become desirable, the connection may have to be closed in order to cancel already issued requests that are no longer desired. [00339] The probability that an issued request needs to be canceled may depend in part on the length of the time interval between the issuance of the request and the playout time of the requested block, in the sense that when this time interval is large, the probability that an issued request needs to be canceled is also high (since the available bandwidth is likely to change during the interval). [00340] As is known, some file download protocols have the property that a single underlying transport layer connection can be advantageously used for multiple download requests. For example, HTTP has this property, since reusing a single connection for multiple requests avoids the "connection initialization penalty" described above for requests other than the first. However, a disadvantage of this approach is that the connection is committed to transporting the data requested in each issued request, and therefore, if a request or requests need to be canceled, then either the connection can be closed, incurring a connection initialization penalty when a replacement connection is established, or the client can wait to receive data that is no longer needed, incurring a delay in receiving subsequent data. [00341] An embodiment is now described which retains the advantages of connection reuse without incurring this disadvantage and which also further improves the frequency with which connections can be reused.
[00342] The block streaming system embodiments described here are configured to reuse a connection for multiple requests without having to commit the connection at the outset to a particular set of requests. Essentially, a new request is issued on an existing connection when the requests already issued on the connection have not yet completed but are close to completion. One reason not to wait until the existing requests complete is that if the previous requests complete, the connection speed might degrade; that is, the underlying TCP session might go into an idle state, or the TCP cwnd variable might be substantially reduced, thereby substantially reducing the initial download speed of the new request issued on that connection. One reason to wait until close to completion before issuing an additional request is that, if a new request is issued long before the previous requests complete, then the newly issued request may not start for some substantial period of time, and it may be the case that during this period of time, before the newly issued request begins, the decision to make the request becomes invalid, for example, due to a decision to switch representations. Thus, client embodiments that implement this technique will issue a new request on a connection as late as possible without reducing the download capabilities of the connection. [00343] The method comprises monitoring the number of bytes received on a connection in response to the last request issued on that connection and applying a test to that number. This can be done by configuring a receiver (or transmitter, if applicable) to perform the monitoring and testing. [00344] If the test passes, then an additional request may be issued on the connection. One example of a suitable test is whether the number of bytes received is greater than a fixed fraction of the size of the requested data. For example, this fraction might be 80%.
Another example of a suitable test is based on the following calculation, as illustrated in Figure 17. In the calculation, R is an estimate of the data rate of the connection, T is an estimate of the Round Trip Time ("RTT"), and X is a numerical factor which, for example, can be a constant set to a value between 0.5 and 2, where the estimates of R and T are updated regularly (updated in step 1410). S is the size of the data requested in the last request, and B is the number of bytes of the requested data received so far (calculated in step 1420). [00345] A suitable test is then to have the receiver (or the transmitter, if applicable) run a routine to evaluate the inequality (S-B) < X*R*T (tested in step 1430), and if "Yes" then an action is performed. For example, a test can be done to see whether there is another request ready to be issued on the connection (tested in step 1440); if "Yes" then that request is issued on the connection (step 1450), while if "No" then the process returns to step 1410 to continue the updating and testing. If the test result in step 1430 is "No", then the process returns to step 1410 to continue the updating and testing. [00346] The inequality test in step 1430 (performed by suitably programmed elements, for example) causes each subsequent request to be issued when the amount of data remaining to be received is equal to X times the amount of data that can be received at the currently estimated reception rate within one RTT. A number of methods for estimating the data rate R in step 1410 are known in the art. For example, the data rate can be estimated as Dt/t, where Dt is the number of bits received in the previous t seconds and where t can be, for example, 1 s or 0.5 s or some other interval. Another method is an exponentially weighted average, or first-order Infinite Impulse Response (IIR) filter, of the input data rate. A number of methods for estimating the RTT, T, in step 1410 are known in the art.
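The test of step 1430 can be sketched directly from the quantities defined above; the function name and default X value are assumptions for illustration:

```python
# Sketch of the step 1430 test, assuming the client tracks a rate estimate
# R, an RTT estimate T, the size S of the last request, and the bytes B
# received for it so far. X is the tuning factor from the text (for
# example, between 0.5 and 2).

def ready_for_next_request(S: int, B: int, R: float, T: float,
                           X: float = 1.0) -> bool:
    """True when the data still outstanding on the connection, S - B,
    could be received within X round-trip times at the estimated rate R,
    i.e., (S - B) < X * R * T, so a new request can be pipelined before
    the connection goes idle."""
    return (S - B) < X * R * T
```

For example, with a 1,000,000-byte request of which 900,000 bytes have arrived, R = 1,000,000 bytes/s and T = 0.1 s, the outstanding 100,000 bytes equal X*R*T for X = 1, so the test is not yet satisfied; once a little more data arrives, it passes and the next request can be issued.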
[00347] The test in step 1430 can be applied to the aggregate of all active connections on an interface, as explained in more detail below. [00348] The method further comprises constructing a list of candidate requests, associating each candidate request with a set of suitable servers to which the request can be made, and ordering the list of candidate requests in order of priority. Some entries in the candidate request list may have the same priority. The servers in the list of suitable servers associated with each candidate request are identified by hostnames. Each hostname corresponds to a set of IP addresses that can be obtained from the Domain Name System, as is well known. Therefore, each possible request in the candidate request list is associated with a set of IP addresses, specifically the union of the sets of IP addresses associated with the hostnames associated with the servers associated with the candidate request. Whenever the test described in step 1430 is satisfied for a connection, and no new request has yet been issued on that connection, the highest-priority request in the candidate request list with which the IP address of the connection's destination is associated is chosen, and this request is issued on the connection. The request is also removed from the list of candidate requests. [00349] Candidate requests can be removed (canceled) from the candidate list, new requests can be added to the candidate list with a priority that is higher than that of requests already on the candidate list, and existing requests on the candidate list can have their priority changed. The dynamic nature of which requests are on the candidate request list can change which requests may be issued next, depending on when a test of the type described in step 1430 is satisfied.
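The candidate-list selection just described can be sketched as follows. The class name, data layout, and helper function are hypothetical; only the selection rule (highest-priority candidate whose server IP set contains the connection's destination address) comes from the text:

```python
# Hypothetical sketch of the candidate request list: each candidate
# carries a priority (lower value = higher priority, as in Figure 18) and
# the set of server IP addresses it may be issued to. When a connection
# satisfies the step 1430 test, the best eligible candidate is issued on
# it and removed from the list.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    priority: int                       # lower value means higher priority
    ips: set = field(default_factory=set)

def issue_on_connection(candidates: list, dest_ip: str):
    """Pick, remove and return the best candidate for this connection,
    or None if no candidate may use the connection's destination IP."""
    eligible = [c for c in candidates if dest_ip in c.ips]
    if not eligible:
        return None
    best = min(eligible, key=lambda c: c.priority)
    candidates.remove(best)
    return best
```

Because the list is mutable, removing, re-prioritizing, or adding candidates between tests changes which request is issued next, exactly the dynamic behavior described in paragraph [00349].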
[00350] For example, it may be the case that, if the answer to the test described in step 1430 is "Yes" at some time t, then the next request issued would be a request A, whereas if the answer to the test described in step 1430 is not "Yes" until some time t'>t, then the next request issued would be a request B, either because request A was removed from the list of candidate requests between times t and t', or because request B was added to the list of candidate requests with higher priority than request A between times t and t', or because request B was on the candidate list at time t but with a lower priority than request A, and between times t and t' the priority of request B was made higher than that of request A. [00351] Figure 18 illustrates an example of a candidate request list. In this example, there are three connections and six requests on the candidate list, labeled A, B, C, D, E, and F. Each of the requests on the candidate list can be issued on a subset of the connections as indicated; for example, request A can be issued on connection 1, whereas request F can be issued on connection 2 or connection 3. The priority of each request is also labeled in Figure 18, and a lower priority value indicates a higher priority. Thus, requests A and B with a priority of 0 are the highest-priority requests, while request F with a priority value of 3 has the lowest priority among the requests on the candidate list. [00352] If, at time t, connection 1 passes the test described in step 1430, then request A or request B is issued on connection 1. If, instead, connection 3 passes the test described in step 1430 at time t, then request D is issued on connection 3, since request D is the highest-priority request that can be issued on connection 3.
[00353] Assume that, for all connections, the response to the test described in step 1430 from time t to some other time t' is "No", and that between times t and t', request A changes its priority from 0 to 5, request B is removed from the candidate list, and a new request G with a priority of 0 is added to the candidate list. Then, at time t', the new candidate list would be as illustrated in Figure 19. [00354] If at time t' connection 1 passes the test described in step 1430, then request C with priority 4 is issued on connection 1, since it is the highest-priority request on the candidate list that can be issued on connection 1 at that time. [00355] Suppose instead that, in this same situation, request A had been issued on connection 1 at time t (it was one of the two highest-priority choices for connection 1 at time t, as illustrated in Figure 18). Since the response to the test described in step 1430 from time t to the later time t' was "No" for all connections, connection 1 was still delivering data until at least time t' for requests issued before time t, and thus request A would not have started until at least time t'. Issuing request C at time t' is therefore a better decision than issuing request A at time t would have been, since request C starts at the same time after t' as request A would have started, and since by that time request C has a higher priority than request A. [00356] As another alternative, if the test of the type described in step 1430 is applied to the aggregate of active connections, a connection can be chosen that has a destination whose IP address is associated with the first request in the candidate request list, or with another request with the same priority as said first request. [00357] A number of methods are possible for building the candidate request list.
For example, the candidate list may contain n requests representing requests for the next n portions of data of the current representation of the presentation, in time sequence order, where the request for an earlier portion of data has a higher priority and the request for a later portion of data has a lower priority. In some cases, n can be equal to one. The value of n can depend on the store size, Bcurrent, or on the State variable, or on another measure of the client's store occupancy. For example, multiple threshold values can be set for Bcurrent, with a value associated with each threshold, and then the value of n is taken to be the value associated with the highest threshold that is less than Bcurrent. [00358] The embodiment described above ensures flexible allocation of requests to connections, ensuring that preference is given to reusing an existing connection even if the highest-priority request is not suitable for that connection (because the connection's destination IP address is not one that is allocated to any of the hostnames associated with the request). The dependence of n on Bcurrent or State or another measure of client store occupancy ensures that such "out of priority order" requests are not issued when the client is in urgent need of the issuance and completion of the request associated with the next portion of data to be played out in time sequence. [00359] These methods can be advantageously combined with cooperative HTTP and FEC.

Consistent Server Selection

[00360] As is well known, files to be downloaded using a file download protocol are commonly identified by an identifier comprising a hostname and a filename. For example, this is the case for the HTTP protocol, in which case the identifier is a URI. A hostname can correspond to multiple hosts, identified by IP addresses. For example, this is a common method of spreading the load of requests from multiple clients across multiple physical machines.
In particular, this approach is commonly taken by CDNs. In that case, a request issued on a connection to any of the physical hosts should succeed. A number of methods are known by which a client can select among the IP addresses associated with a hostname. For example, these addresses are typically provided to the client through the Domain Name System and are provided in order of priority. A client can then choose the highest-priority (first) IP address. However, there is generally no coordination between clients as to how this choice is made, with the result that different clients may request the same file from different servers. This can result in the same file being cached on multiple nearby servers, which reduces the efficiency of the caching infrastructure. [00361] This can be addressed by a system that advantageously increases the probability that two clients requesting the same block will request that block from the same server. The novel method described here comprises selecting among the available IP addresses in a way determined by the identifier of the file to be requested, and in such a way that different clients presented with the same or similar choices of IP addresses and file identifiers make the same choice. [00362] A first embodiment of the method is described with reference to Figure 20. The client first obtains a set of IP addresses, IP1, IP2, ..., IPn, as illustrated in step 1710. If there is a file for which requests should be issued, as decided in step 1720, then the client determines to which IP address to issue requests for the file, as determined in steps 1730-1770. Given a set of IP addresses and an identifier for a file to be requested, the method comprises ordering the IP addresses in a way determined by the file identifier. For example, for each IP address, a sequence of bytes is constructed comprising the concatenation of the IP address and the file identifier, as illustrated in step 1730.
A hash function is applied to that sequence of bytes, as illustrated in step 1740, and the resulting hash values are arranged according to a fixed ordering, as illustrated in step 1750, e.g., ascending numerical order, inducing an ordering on the IP addresses. The same hash function can be used by all clients, thus ensuring that the hash function produces the same result for a given input at all clients. The hash function can be statically configured in all clients of a set of clients; or all clients in a set of clients can obtain a partial or full description of the hash function when they obtain the list of IP addresses; or all clients in a set of clients can obtain a partial or full description of the hash function when they obtain the file identifier; or the hash function can be determined by other means. The IP address that is first in this ordering is chosen, and that address is then used to establish a connection and issue requests for all or parts of the file, as illustrated in steps 1760 and 1770. [00363] The above method can be applied when a new connection is established to request a file. It can also be applied when a number of established connections are available and one of them can be chosen to issue a new request. [00364] Additionally, when an established connection is available and a request can be chosen from a set of candidate requests of equal priority, an ordering of the candidate requests is induced, for example, by the same method of hash values described above, and the candidate request that appears first in that ordering is chosen. The methods can be combined to select both a connection and a candidate request from a set of connections and requests of the same priority, again by computing a hash for each combination of connection and request, ordering these hash values according to a fixed ordering, and choosing the combination that occurs first in the ordering induced on the set of combinations of requests and connections.
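Steps 1730-1770 can be sketched compactly. SHA-256 stands in here for the shared hash function; the text only requires that all clients agree on a deterministic hash, so the specific choice is an assumption:

```python
# Minimal sketch of the deterministic server selection of steps 1730-1770:
# concatenate each candidate IP address with the file identifier, hash the
# result with a hash function shared by all clients, order by hash value,
# and pick the first address. SHA-256 is an assumed stand-in for the
# shared hash function.

import hashlib

def pick_server(ip_addresses, file_identifier: str) -> str:
    """Return the IP address that every client should use for this file."""
    def key(ip: str) -> bytes:
        # Steps 1730/1740: hash of the concatenation of IP and identifier.
        return hashlib.sha256((ip + file_identifier).encode()).digest()
    # Steps 1750/1760: the fixed ordering of hash values induces the choice.
    return min(ip_addresses, key=key)
```

All clients given the same (or overlapping) IP lists map the same file to the same server, while different files spread roughly uniformly over the list, which yields the caching and load-balancing benefits described below.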
[00365] This method has an advantage for the following reason: a typical approach taken by a block serving infrastructure such as that illustrated in Figure 1 (BSI 101) or Figure 2 (BSIs 101), and in particular an approach commonly taken by CDNs, is to provide multiple caching proxy servers that receive client requests. A caching proxy server may not hold the file requested in a given request, in which case such servers typically forward the request to another server, receive the response from that server, typically including the requested file, and forward the response to the client. The caching proxy server can also cache the requested file so that it can immediately respond to subsequent requests for the file. The common approach described above has the property that the set of files cached on a given caching proxy server is largely determined by the set of requests that the caching proxy server has received. [00366] The method described above has the following advantage. If all clients in a set of clients receive the same list of IP addresses, then those clients will use the same IP address for all requests issued for the same file. If there are two different lists of IP addresses and each client receives one of these two lists, then the clients will use at most two different IP addresses for all requests issued for the same file. In general, if the IP address lists provided to clients are similar, then the clients will use a small set of the provided IP addresses for all requests issued for the same file. Since nearby clients tend to receive similar lists of IP addresses, nearby clients are likely to issue requests for a file to only a small fraction of the caching proxy servers available to those clients. Thus, only a small fraction of the caching proxy servers will cache the file, which advantageously minimizes the amount of caching resources used to cache the file.
[00367] Preferably, the hash function has the property that a very small fraction of different inputs are mapped to the same output and that different inputs are mapped to essentially random outputs, to ensure that, for a given set of IP addresses, the proportion of files for which a given IP address is first in the sorted list produced by step 1750 is approximately the same for all IP addresses in the list. On the other hand, it is important that the hash function be deterministic, in the sense that for a given input the output of the hash function is the same for all clients. [00368] Another advantage of the method described above is the following. Assume that all clients in a set of clients receive the same list of IP addresses. Due to the properties of the hash function just described, it is likely that requests for different files from these clients will be spread evenly across the set of IP addresses, which in turn means that the requests will be spread evenly across the caching proxy servers. Thus, the caching resources for storing these files are spread evenly across the caching proxy servers, and requests for files are spread evenly across the caching proxy servers. In this way, the method provides both storage balancing and load balancing across the caching infrastructure. [00369] A number of variations of the approach described above are known to those skilled in the art, and in many cases these variations retain the property that the set of files stored on a given proxy is determined, at least in part, by the set of requests that the caching proxy server has received. In the common case where a given hostname resolves to multiple physical caching proxy servers, it will be common for all of these servers to eventually store a copy of any given file that is frequently requested.
Such duplication can be undesirable, as the storage resources of the caching proxy servers are limited and, as a result, files can occasionally be removed (purged) from the cache. The novel method described here ensures that requests for a given file are directed to a small number of caching proxy servers, thereby increasing the probability that any given file is present (that is, not purged) in the proxy cache. [00370] When a file is present in the proxy cache, the response sent to the client is faster, which has the advantage of reducing the probability of the requested file arriving late, which could cause a pause in media playback and, therefore, a poor user experience. Additionally, when a file is not present in the proxy cache, the request may be sent to another server, causing additional load on both the server infrastructure and the network connections between servers. In many cases, the server to which the request is sent may be in a distant location, and transmitting the file from that server back to the caching proxy server may incur transmission costs. Therefore, the novel method described here results in a reduction of these transmission costs.

Probabilistic Full File Requests

[00371] A particular concern in the case where the HTTP protocol is used with Range requests is the behavior of caching servers that are commonly used to provide scalability in the server infrastructure. While it may be common for HTTP caching servers to support the HTTP Range header, the exact behavior of different HTTP caching servers varies by implementation. Most caching server implementations serve Range requests from the cache if the file is available in the cache. A common implementation of HTTP caching servers always forwards downstream HTTP requests containing the Range header to an upstream node, unless the caching server (caching or origin server) has a copy of the file.
In some implementations, the upstream response to the Range request is the entire file, this entire file is cached, and the response to the downstream Range request is extracted from that file and sent. However, in at least one implementation, the upstream response to the Range request is just the data bytes of the Range request itself, and these data bytes are not cached but instead are simply sent as the response to the downstream Range request. As a result, the use of Range headers by clients can have the consequence that the file itself is never brought into the cache, and the desirable scalability properties of the network are lost. [00372] Above, the operation of caching proxy servers was described, as was the method of requesting blocks from a file that is an aggregation of multiple blocks. For example, this can be achieved using the HTTP Range request header. Such requests are referred to as "partial requests" below. A further embodiment is now described, which has an advantage in the case where the block serving infrastructure 101 does not provide full support for the HTTP Range header. Commonly, servers within a block serving infrastructure, for example a Content Delivery Network, support partial requests but may not store the response to a partial request, instead forwarding the request to another server, unless the entire file is stored in the local cache, in which case the response can be sent without forwarding the request to another server. [00373] A block request streaming system that makes use of the novel block aggregation enhancement described above may perform poorly if the block serving infrastructure exhibits this behavior, since all requests, being partial requests, will be forwarded to another server and no requests will be served by the caching proxy servers, defeating the purpose of providing the caching proxy servers in the first place.
[00373] A block request streaming system that makes use of the new block aggregation enhancement described above may perform poorly if the block serving infrastructure exhibits this behavior, since all requests, being partial requests, will be forwarded to another server and no requests will be served by the caching proxy servers, defeating the purpose of providing the caching proxy servers in the first place. During the block request streaming process as described above, a client may at some point request a Block that is at the beginning of a file. [00374] According to the new method described here, whenever a given condition is met, such requests can be converted from requests for the first Block of a file into requests for the entire file. When a request for an entire file is received by a caching proxy server, the proxy server typically caches the response. Therefore, using these requests causes the file to be brought into the cache of the local caching proxy servers, so that subsequent requests, whether for the whole file or partial requests, can be served directly by the caching proxy server. The condition can be such that, out of a set of requests associated with a given file, for example the set of requests generated by a set of clients viewing the content item in question, the condition is met for at least a given fraction of those requests. [00375] An example of a suitable condition is that a randomly chosen number is above a given threshold. This threshold can be configured so that the conversion of a single-Block request into a whole-file request occurs on average for a given fraction of the requests, for example once in ten (in which case the random number can be chosen from the interval [0,1] and the threshold can be 0.9). Another example of a suitable condition is that a hash function computed from some information associated with the block and some information associated with the client takes one of a given set of values. This method has the advantage that, for a file that is frequently requested, the file is brought into the cache of a local proxy server; however, the operation of the block request streaming system is not significantly changed from the standard operation in which each request is for a single Block.
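As a sketch of the two example conditions above (hypothetical function and parameter names; the random-number variant uses the interval [0,1] with threshold 0.9, so on average one in ten first-block requests is converted):

```python
import hashlib
import random

def should_request_whole_file(block_index, threshold=0.9, rng=random.random):
    """Convert a request for the first Block of a file into a request for
    the entire file when a randomly chosen number in [0,1] exceeds the
    threshold. With threshold=0.9, roughly one in ten first-block requests
    is converted, pulling popular files into the proxy cache without
    significantly changing the standard one-block-per-request operation."""
    if block_index != 0:  # only requests for the first Block are eligible
        return False
    return rng() > threshold

def should_request_whole_file_hashed(file_id, client_id, modulus=10):
    """Deterministic variant: a hash computed from information associated
    with the block and with the client takes one of a given set of values
    (here, the single value 0)."""
    digest = hashlib.sha256(f"{file_id}/{client_id}".encode()).digest()
    return digest[0] % modulus == 0
```

The hashed variant has the property that the same client always makes the same decision for the same file, so the fraction of converted requests is spread across the client population rather than varying per request.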
In many cases where conversion of a request from a single-Block request into a whole-file request takes place, the client procedures would otherwise proceed to request other Blocks within the file. If that is the case, then such requests can be suppressed, since the blocks in question will be received in any case as a result of requesting the entire file. URL Construction and Segment List Generation and Seeking [00376] Segment list generation deals with the problem of how a client can generate a segment list from the MPD at a specific client-local time NOW for a specific representation starting at some start time starttime, either with respect to the start of the media presentation for on-demand cases or expressed in wall-clock time. A segment list can comprise a locator, for example a URL, for optional initial representation metadata, in addition to a list of media segments. Each media segment may have been assigned a starttime, a duration, and a locator. The starttime typically expresses an approximation of the media time of the media contained in a segment, but not necessarily an accurate sample time. The starttime is used by the HTTP streaming client to issue the download request at the appropriate time. The segment list generation, including the start time of each segment, can be performed in different ways. URLs can be provided as a playlist, or a URL construction rule can advantageously be used for a compact representation of the segment list. [00377] A URL-construction-based segment list can, for example, be generated if the MPD signals this by a specific attribute or element such as FileDynamicInfo or an equivalent signal. A generic way to create a segment list from a URL construction is provided below in the "URL Construction Overview" section. A playlist-based construction can, for example, be signaled by a different signal. Seeking within the segment list and obtaining an accurate media time are also advantageously implemented in this context.
URL Construction Overview [00378] As described above, in an embodiment of the present invention a metadata file containing URL construction rules that allow client devices to construct file identifiers for Blocks of the presentation can be provided. An additional new enhancement to the block request streaming system is now described, which provides for changes in the file metadata, including changes to the URL construction rules, changes to the number of available encodings, and changes to the metadata associated with the available encodings, such as bit rate, aspect ratio, resolution, audio or video codec or codec parameters, or other parameters. [00379] In this new enhancement, additional data associated with each element of the metadata file may be provided indicating a time interval within the overall presentation. Within that time interval the element can be considered valid, and outside that time interval the element can be ignored. Additionally, the metadata syntax can be enhanced so that elements previously allowed to appear only once or at most once can appear multiple times. An additional restriction can be applied in this case, providing that for such elements the specified time intervals must be disjoint. At any given time, considering only the elements whose time interval contains the given time results in a metadata file that is consistent with the original metadata syntax. Such time intervals are called validity intervals. This method therefore provides for signaling, within a single metadata file, changes of the type described above. Advantageously, such a method can be used to provide a media presentation that supports changes of the type described at specific points within the presentation. URL Constructor [00380] As described here, a common feature of block request streaming systems is the need to provide the client with "metadata" that identifies the available media encodings and provides the information needed by the client to request blocks of those encodings.
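A minimal sketch of the validity-interval rule (the tuple representation of a metadata element is a hypothetical choice): at any given time t, only elements whose interval contains t are retained, and because intervals of multiple occurrences of the same element must be disjoint, the result is consistent with the original metadata syntax.

```python
def valid_elements(elements, t):
    """Filter metadata elements by validity interval.

    Each element is a (start, end, payload) tuple with a half-open
    interval [start, end). Since the intervals for multiple occurrences
    of the same element are required to be disjoint, at most one
    occurrence of each element is valid at any given time t."""
    return [payload for (start, end, payload) in elements if start <= t < end]
```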
For example, in the case of HTTP this information can comprise URLs of files containing the media blocks. A playlist file can be provided listing the URLs of the blocks for a given encoding. Multiple playlist files are provided, one for each encoding, along with a master playlist that lists the playlists corresponding to the different encodings. A disadvantage of this system is that the metadata can become quite large and therefore take some time to be requested when the client starts the stream. An additional disadvantage of this system is evident in the case of live content, when files corresponding to media data blocks are generated "on the fly" from a media stream that is being captured in real time (live), for example a live sporting event or news event. In this case, the playlist files may be updated each time a new block is available (for example, every few seconds). Client devices may repeatedly fetch the playlist file to determine whether new blocks are available and to obtain their URLs. This can place a significant load on the server infrastructure, and in particular means that the metadata files cannot be cached for longer than the update interval, which is equal to the block duration, commonly on the order of a few seconds. [00381] An important aspect of a block request streaming system is the method used to inform clients of the file identifiers, e.g., URLs, which must be used, together with the file download protocol, to request the Blocks. For example, there is a method in which, for each representation of a presentation, a playlist file is provided that lists the URLs of the files containing the media data blocks. A disadvantage of this method is that at least some of the playlist files themselves need to be downloaded before playback can begin, increasing channel zapping time and therefore causing a poor user experience.
For a long media presentation with multiple or many representations, the list of file URLs can be large, and therefore the playlist file can be large, further increasing channel zapping time. [00382] Another disadvantage of this method occurs in the case of live content. In this case, the complete list of URLs is not made available in advance; the playlist file is periodically updated as new blocks become available, and clients periodically request the playlist file in order to receive the updated version. Since this file is frequently updated, it cannot be stored for long within the caching proxy servers. This means that many of the requests for this file will be forwarded to other servers and eventually to the server that generates the file. In the case of a popular media presentation this can result in a high load on this server and the network, which can in turn result in a slow response time and therefore a high channel zapping time and a poor user experience. In the worst case, the server becomes overloaded and some users become unable to view the presentation. [00383] It is desirable in the design of a block request streaming system to avoid placing restrictions on the form of the file identifiers that can be used. This is because a number of considerations can motivate the use of identifiers in a particular way. For example, in case the Block Serving Infrastructure is a Content Delivery Network, there may be file naming or storage conventions related to a desire to distribute the storage or service load across the network, or other requirements that lead to particular forms of file identifier that cannot be predicted at system design time. [00384] An additional embodiment is now described which mitigates the disadvantages mentioned above while retaining the flexibility to choose suitable file identification conventions.
In this method, metadata can be provided for each representation of the media presentation comprising a file identifier construction rule. The file identifier construction rule can, for example, comprise a text string. In order to determine the file identifier for a given block of the presentation, a method of interpreting the file identifier construction rule can be provided, this method comprising determining input parameters and evaluating the file identifier construction rule together with the input parameters. The input parameters can, for example, include an index of the file to be identified, where the first file has index zero, the second has index one, the third has index two, and so on. For example, in case every file spans the same length of time (or approximately the same length of time), then the file index associated with any given time within the presentation can easily be determined. Alternatively, the time within the presentation spanned by each file can be provided within the presentation or version metadata. [00385] In one embodiment, the file identifier construction rule may comprise a text string that may contain certain special identifiers corresponding to input parameters. The method of evaluating the file identifier construction rule comprises determining the positions of the special identifiers within the text string and replacing each special identifier with a string representation of the corresponding input parameter value. [00386] In another embodiment, the file identifier construction rule may comprise a text string conforming to an expression language. An expression language comprises a definition of a syntax to which expressions in the language conform and a set of rules for evaluating a string conforming to that syntax. [00387] A specific example will now be described, with reference to figure 21 et seq.
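The substitution embodiment of paragraph [00385] can be sketched as follows; the `$name$` syntax for special identifiers and the parameter names are hypothetical choices for illustration, not mandated by the method:

```python
def build_file_identifier(rule, **params):
    """Evaluate a file identifier construction rule by replacing each
    special identifier (written here as $name$) with a string
    representation of the corresponding input parameter value."""
    out = rule
    for name, value in params.items():
        out = out.replace(f"${name}$", str(value))
    return out

# Example: a rule with "index" and "bitrate" input parameters.
# build_file_identifier("seg-$bitrate$-$index$.3gp", index=3, bitrate=500)
# yields "seg-500-3.3gp"
```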
An example of a syntax definition for a suitable expression language, defined in Augmented Backus-Naur Form, is as illustrated in figure 21. An example of the rules for evaluating a string conforming to the <expression> production of figure 21 comprises recursively transforming a string conforming to the <expression> production (an <expression>) into a string conforming to the <literal> production, as follows: An <expression> conforming to the <literal> production is unchanged. An <expression> conforming to the <variable> production is replaced by the value of the variable identified by the <token> string of the <variable> production. An <expression> conforming to the <function> production is evaluated by evaluating each of its arguments according to these rules and applying a transformation to those arguments depending on the <token> element of the <function> production, as described below. An <expression> conforming to the last alternative of the <expression> production is evaluated by evaluating the two <expression> elements and applying an operation to those arguments depending on the <operator> element of the last alternative of the <expression> production, as described below. [00388] In the method described above it is considered that evaluation takes place in a context in which a plurality of variables can be defined. A variable is a (name, value) pair, where "name" is a string conforming to the <token> production and "value" is a string conforming to the <literal> production. Some variables can be defined outside the evaluation process before evaluation begins. Other variables can be defined within the evaluation process itself. All variables are "global" in the sense that only one variable exists with each possible "name". [00389] An example of a function is the "printf" function. This function accepts one or more arguments. The first argument can conform to the <string> production (hereafter a "string").
The printf function evaluates to a transformed version of its first argument. The transformation applied is the same as that of the "printf" function of the C standard library, with the additional arguments included in the <function> production supplying the additional arguments expected by the printf function of the C standard library. [00390] Another example of a function is the "hash" function. This function accepts two arguments, the first of which can be a string and the second of which can conform to the <number> production (hereafter a "number"). The "hash" function applies a hash algorithm to the first argument and returns a result that is a non-negative integer less than the second argument. An example of a suitable hash function is given by the C function illustrated in figure 22, whose arguments are the input string (excluding the enclosing quotation marks) and the numeric input value. Other examples of hash functions are well known to those skilled in the art. [00391] Another example of a function is the "subst" function, which takes one, two, or three string arguments. In case one argument is supplied, the result of the "subst" function is the first argument. In case two arguments are supplied, the result of the "subst" function is computed by removing all occurrences of the second argument (excluding the enclosing quotation marks) from the first argument and returning the first argument so modified. In case three arguments are supplied, the result of the "subst" function is computed by replacing all occurrences of the second argument (excluding the enclosing quotation marks) within the first argument with the third argument (excluding the enclosing quotation marks) and returning the first argument so modified. [00392] Some examples of operators are the addition, subtraction, division, multiplication, and modulus operators, identified by the <operator> productions '+', '-', '/', '*', '%', respectively.
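Figure 22 gives a suitable hash function in C; an equivalent sketch of such a function (a standard byte-wise polynomial hash, not necessarily the exact function of figure 22) is:

```python
def hash_fn(s, n):
    """Apply a hash algorithm to string s and return a non-negative
    integer less than n, as required of the "hash" function.

    This is a djb2-style rolling hash kept within 32 bits; any function
    with the same output range would serve."""
    h = 5381
    for ch in s.encode():
        h = (h * 33 + ch) & 0xFFFFFFFF
    return h % n
```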
These operators require the <expression> productions on either side of the <operator> production to evaluate to numbers. Evaluation of the operator comprises applying the appropriate arithmetic operation (addition, subtraction, division, multiplication, and modulus, respectively) to these two numbers in the normal way and returning the result in a form conforming to the <number> production. [00393] Another example of an operator is the assignment operator, identified by the <operator> production '='. This operator requires the left argument to evaluate to a string whose content conforms to the <token> production. The content of a string is defined to be the characters within the enclosing quotation marks. The assignment operator causes the variable whose name is the <token> equal to the content of the left argument to receive a value equal to the result of evaluating the right argument. This value is also the result of evaluating the operator expression. [00394] Another example of an operator is the sequence operator, identified by the <operator> production ';'. The result of evaluating this operator is the right argument. Note that, as with all operators, both arguments are evaluated and the left argument is evaluated first. [00395] In an embodiment of this invention, the identifier of a file can be obtained by evaluating a file identifier construction rule according to the rules above with a specific set of input variables that identify the required file. An example of an input variable is the variable with the name "index" and a value equal to the numeric index of the file within the presentation. Another example of an input variable is the variable with the name "bitrate" and a value equal to the average bit rate of the required version of the presentation.
[00396] Figure 23 illustrates some examples of file identifier construction rules, where the input variables are "id", providing an identifier for the representation of the desired presentation, and "seq", providing a sequence number for the file. [00397] As will be clear to those skilled in the art after reading this description, numerous variations of the above method are possible. For example, not all of the functions and operators described above may be provided, or additional functions or operators may be provided. URL Construction Rules and Timing [00398] This section provides basic URI construction rules for designating a file or segment URI, plus a start time for each segment, within a media presentation and representation. [00399] For this clause, the availability of a media presentation description at the client is assumed. [00400] It is assumed that the HTTP streaming client is playing media that is downloaded within a media presentation. The actual presentation time of the HTTP client can be defined with respect to where the presentation time is relative to the start of the presentation. During initialization, presentation time t=0 can be assumed. [00401] At any time t, the HTTP client can download any data with playtime tP (also with respect to the start of the presentation) up to MaximumClientPreBufferTime in advance of the actual presentation time t, and any data that is needed due to an interaction with the user, for example seeking, fast-forward, etc. In some embodiments, MaximumClientPreBufferTime may not be specified, in the sense that a client may download data ahead of the current playtime tP without restriction. [00402] The HTTP client can avoid downloading unnecessary data; for example, segments of representations that are not expected to be played may typically not be downloaded.
[00403] The basic process in providing streaming services can be the downloading of data by generating appropriate requests to download entire files/segments or subsets of files/segments, for example by using HTTP GET requests or partial HTTP GET requests. This description addresses how to access the data for a specific playtime tP, but generally the client can download the data for a larger range of playtimes to avoid inefficient requests. The HTTP client can minimize the number/frequency of HTTP requests in providing the streaming service. [00404] To access media data at playtime tP, or at least close to playtime tP, in a specific representation, the client determines the URL of the file containing this playtime and additionally determines the byte range within the file for accessing this playtime. [00405] The Media Presentation Description can assign a representation id, r, to each representation, for example by using the RepresentationID attribute. In other words, the MPD content, when written by the ingestion system or when read by the client, will be interpreted so that there is such an assignment. In order to download data for a specific playtime tP of a specific representation with id r, the client can construct a suitable URI for a file. [00406] The Media Presentation Description can assign to each file or segment of each representation r the following attributes. [00407] (a) a sequence number i of the file within representation r, with i = 1, 2, ..., Nr; (b) the relative start time of the file with representation id r and file index i with respect to the presentation time, defined as ts(r,i); (c) the file URI for the file/segment with representation id r and file index i, denoted as FileURI(r,i). [00408] In one embodiment, the file start times and file URIs may be explicitly provided for a representation.
In another embodiment, a list of file URIs can be explicitly provided, where each file URI is inherently assigned the index i according to its position in the list, and the start time of segment i is derived as the sum of the durations of segments 1 to i-1. The duration of each segment can be given according to any of the rules discussed above. Those of ordinary skill in the art can use other methods to derive a way to easily obtain the start time from a single element or attribute and the position/index of the file URI in the representation. [00409] If a dynamic URI construction rule is provided in the MPD, then the start time of each file and each file URI can be constructed dynamically by using a construction rule, the index of the requested file, and potentially some additional parameters provided in the media presentation description. The information can, for example, be provided in MPD attributes or elements such as FileURIPattern and FileInfoDynamic. The FileURIPattern provides information on how to construct the URIs based on the file index sequence number i and the representation id r. The FileURIFormat is constructed as: FileURIFormat=sprintf("%s%s%s%s%s%s", BaseURI, BaseFileName, RepresentationIDFormat, SeparatorFormat, FileSequenceIDFormat, FileExtension); and FileURI(r,i) is constructed as FileURI(r,i)=sprintf(FileURIFormat, r, i); [00410] The relative start time ts(r,i) for each file/segment can be derived from some attribute contained in the MPD describing the duration of the segments in this representation, for example the FileInfoDynamic attribute. The MPD can also contain a sequence of FileInfoDynamic attributes that is global for all representations in the media presentation, or at least for all representations in a period, as specified above. If media data for a specific playtime tP in representation r is requested, the corresponding index i(r,tP) can be derived such that the playtime tP falls between the start times ts(r,i(r,tP)) and ts(r,i(r,tP)+1).
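Under the FileURIPattern scheme above, the construction and the index derivation can be sketched as follows; the concrete attribute values are hypothetical, and Python's `%` formatting stands in for sprintf:

```python
def make_file_uri(base_uri, base_file_name, rep_id_fmt,
                  separator, seq_id_fmt, extension, r, i):
    """Build FileURI(r, i), mirroring
    FileURIFormat = sprintf("%s%s%s%s%s%s", BaseURI, BaseFileName,
        RepresentationIDFormat, SeparatorFormat, FileSequenceIDFormat,
        FileExtension) followed by FileURI(r,i) = sprintf(FileURIFormat, r, i)."""
    file_uri_format = (base_uri + base_file_name + rep_id_fmt +
                       separator + seq_id_fmt + extension)
    return file_uri_format % (r, i)

def segment_index(start_times, tP):
    """Derive i(r, tP): the index i with ts(r,i) <= tP < ts(r,i+1).
    start_times is the list [ts(r,1), ts(r,2), ..., ts(r,Nr)]."""
    i = 0
    while i + 1 < len(start_times) and start_times[i + 1] <= tP:
        i += 1
    return i + 1  # 1-based file index, matching i = 1, 2, ..., Nr

# Hypothetical pattern: BaseURI + BaseFileName + "%s" + "_" + "%05d" + ".3gp"
uri = make_file_uri("http://example.com/", "seg", "%s", "_", "%05d", ".3gp",
                    "rep1", 7)
# uri == "http://example.com/segrep1_00007.3gp"
```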
Segment access may be further restricted in the cases above, for example if the segment is not accessible. [00411] How to access the exact playtime tP, once the corresponding segment index and URI are obtained, depends on the actual segment format. In this example it is assumed, without loss of generality, that the media segments have a local timeline that starts at 0. To access and present the data at playtime tP, the client can download the data corresponding to the local time from the file/segment that can be accessed through the URI FileURI(r,i), with i = i(r,tP). [00412] Generally, the client can download the entire file and then access playtime tP. However, not necessarily all the information in the 3GP file needs to be downloaded, as the 3GP file provides structures for mapping the local timing to byte ranges. Therefore, downloading only the specific byte ranges needed to access playtime tP may be sufficient to play the media, as long as sufficient random access information is available. In addition, sufficient information on the structure and the mapping between byte ranges and local timing of the media segment can be provided at the beginning of the segment, for example using a segment index. By having access to, for example, the initial 1200 bytes of the segment, the client can have enough information to directly access the byte range necessary for playtime tP. [00413] In a further example, it is assumed that the segment index, possibly specified as the "tidx" box as described below, can be used to identify the byte offsets of the required Fragment or Fragments. Partial GET requests can be formed for the required Fragment or Fragments. There are other alternatives; for example, the client can issue a standard request for the file and cancel it when the first "tidx" box has been received. Seeking [00414] A client may attempt to seek to a specific presentation time tp in a representation.
Based on the MPD, the client has access to the media segment start time and the media segment URL of each segment in the representation. The client can obtain the segment index segment_index of the segment most likely to contain media samples for the presentation time tp as the maximum segment index i for which the start time tS(r,i) is less than or equal to the presentation time tp, that is, segment_index = max{i | tS(r,i) <= tp}. The segment URL is obtained as FileURI(r,i). [00415] Note that timing information in the MPD may be approximate, due to issues related to the placement of Random Access Points, media track alignment, and media timing drift. As a result, the segment identified by the above procedure might begin at a time slightly after tp, and the media data for presentation time tp might be in the previous media segment. In the case of seeking, the seek time can be updated to equal the first sample time of the retrieved file, or the preceding file can be retrieved instead. However, note that during continuous playback, including cases where there is a switch between alternative representations/versions, the media data for the time between tp and the start of the retrieved segment is nevertheless available. [00416] For precise seeking to a presentation time tp, the HTTP streaming client needs to access a random access point (RAP). To determine the random access point in a media segment in the case of 3GPP Adaptive HTTP Streaming, the client can, for example, use the information in the "tidx" or "sidx" box, if present, to locate the random access points and the corresponding presentation times in the media presentation. In cases where a segment is a 3GPP movie fragment, it is also possible for the client to use the information inside the "moof" and "mdat" boxes, for example, to locate the RAPs and obtain the necessary presentation time from the information in the fragment and the segment start time derived from the MPD.
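The seek rule segment_index = max{i | tS(r,i) <= tp} can be sketched with a binary search; the sorted list of start times and the parallel list of URLs are hypothetical inputs standing in for the MPD-derived segment list:

```python
import bisect

def seek(start_times, urls, tp):
    """Return (segment_index, url) for presentation time tp.

    start_times must be sorted ascending; the result index is the
    largest i with start_times[i] <= tp, i.e. the segment most likely
    to contain media samples for tp. Since MPD timing may be
    approximate, the previous segment may occasionally hold the data."""
    i = bisect.bisect_right(start_times, tp) - 1
    if i < 0:
        i = 0  # tp precedes the first segment; fall back to it
    return i, urls[i]
```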
If no RAP with presentation time before the requested presentation time tp is available, the client can access the previous segment or can simply use the first random access point as the seek result. When media segments start with a RAP, these procedures are simple. [00417] Furthermore, it should be noted that not necessarily all the information of the media segment needs to be downloaded to access the presentation time tp. The client can, for example, initially request the "tidx" or "sidx" box from the beginning of the media segment using byte range requests. By using the "tidx" or "sidx" boxes, segment timing can be mapped to byte ranges of the segment. By continuing to use partial HTTP requests, only the relevant parts of the media segment need to be accessed, for an improved user experience and low startup delays. Segment List Generation [00418] As described here, it should be apparent how to implement a straightforward HTTP streaming client that uses the information provided by the MPD to create a segment list for a representation that has a signaled approximate segment duration of dur. In some embodiments, the client may assign the media segments within a representation consecutive indices i = 1, 2, 3, ..., that is, the first media segment receives the index i=1, the second media segment receives the index i=2, and so on. Then the list of media segments, with segment indices i, start times startTime[i], and URLs URL[i], is generated, for example, as follows. First, the index i is set to 1. The start time of the first media segment is taken as 0, startTime[1]=0. The URL of media segment i, URL[i], is obtained as FileURI(r,i). The process continues for all described media segments with index i, the startTime[i] of media segment i is taken as (i-1)*dur, and URL[i] is obtained as FileURI(r,i).
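The generation procedure above reduces to a short loop; here FileURI is passed in as a function (as constructed in the preceding sections) and dur is the signaled approximate segment duration:

```python
def generate_segment_list(r, n_segments, dur, file_uri):
    """Build the segment list with startTime[i] = (i-1)*dur and
    URL[i] = FileURI(r, i) for i = 1..n_segments, as an HTTP streaming
    client would from the MPD-signaled approximate duration dur."""
    segments = []
    for i in range(1, n_segments + 1):
        segments.append({"index": i,
                         "startTime": (i - 1) * dur,
                         "URL": file_uri(r, i)})
    return segments
```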
Simultaneous HTTP/TCP Requests [00419] A concern in a block request streaming system is the desire to always request the highest quality blocks that can be completely received in time for playback. However, the data arrival rate may not be known in advance, and so it may happen that a requested block does not arrive in time to be played. This results in a need to pause media playback, which results in a poor user experience. This problem can be mitigated by client algorithms that take a conservative approach to block selection, requesting lower quality (and smaller) blocks that are more likely to be received in time, even if the data arrival rate drops during block reception. However, this conservative approach has the downside of possibly delivering lower quality playback to the user or destination device, which is also a poor user experience. The problem can be amplified when multiple HTTP connections are used at the same time to download different blocks, as described below, since the available network resources are shared across the connections and are thus being used simultaneously for blocks with different playout times. [00420] It can be advantageous for the client to issue requests for multiple blocks simultaneously, where in this context "simultaneously" means that responses to the requests occur in overlapping time intervals; it is not necessarily the case that the requests are made at precisely, or even approximately, the same time. In the case of the HTTP protocol, this approach can improve the utilization of the available bandwidth due to the behavior of the TCP protocol (as is well known). This can be especially important for improving content zapping time, since when new content is first requested the corresponding HTTP/TCP connections over which data for the blocks is requested can be slow to start, and thus using multiple HTTP/TCP connections at this point can dramatically speed up the data delivery time of the first blocks.
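Issuing requests for several blocks over concurrent connections might be sketched as follows; the single-request fetcher `fetch_one` is a hypothetical stand-in for an HTTP GET over one TCP connection:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_blocks(fetch_one, block_urls, n_connections=3):
    """Download several blocks over up to n_connections concurrent
    connections, so that responses arrive in overlapping time intervals,
    and return the bodies in request order."""
    with ThreadPoolExecutor(max_workers=n_connections) as pool:
        return list(pool.map(fetch_one, block_urls))
```

Note that `pool.map` preserves request order, but as the surrounding text explains, the underlying downloads complete in an unpredictable order, which is exactly the source of the variable channel zapping time discussed next.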
However, requesting different blocks or fragments over different HTTP/TCP connections can also result in degraded performance as requests for blocks that must be replayed first are competing with requests for subsequent blocks, HTTP/TCP downloads competitors can vary greatly in their delivery speed and thus the time to complete the request can be very variable, and it is generally impossible to control which HTTP/TCP downloads will complete quickly and which will be slower, and thus it is likely that at least some of the time the HTTP/TCP downloads of the first few blocks will be the last to be completed, thus resulting in large and variable channel zapping times. [00421] It is assumed that each block or fragment of a segment is downloaded through a separate HTTP/TCP connection, and that the number of parallel connections is equal to the playback duration of each block is t seconds, and that the rate of sequencing the content associated with the segment is S. When the client first begins to sequence the content, requests can be issued for the first n blocks, representing n*t seconds of media data. [00422] As is known by those skilled in the art, there is a wide variation in the data rate of TCP connections. However, to simplify this discussion, it is ideally assumed that all connections are proceeding in parallel so that the first block is completely received at approximately the same time as the other n-1 requested blocks. To simplify further discussion, it is assumed that the aggregate bandwidth used by download connections is fixed at a value B for the entire duration of the download, and that the sequencing rate S is constant across the entire representation. It is further assumed that the media data structure is such that the playback of a block can be done when the entire block is available on the client, that is, the playback of a block can only be done after the entire block is received. 
, for example, because of the underlying video encoding structure, or because encryption is employed to encrypt each fragment or block separately, so that the entire fragment or block must be received before it can be decrypted. Thus, to simplify the discussion below, it is assumed that an entire block needs to be received before any of the block can be played. Then the time taken before the first block has arrived and can be played is approximately n*t*S/B. [00423] Since it is desirable to minimize the content zapping time, it is therefore desirable to minimize n*t*S/B. The value of t may be determined by factors such as the underlying video encoding structure and how the ingestion methods are used, and so t can be reasonably small, but very small values of t lead to an overly complicated segment map and may be incompatible with efficient video encoding and with decryption, if used. The value of n can also affect the value of B, that is, B can be larger for a larger number n of connections; thus, reducing the number of connections n has the negative side effect of potentially reducing the amount of available bandwidth that is used, B, and so may not be effective in achieving the goal of reducing the content zapping time. The value of S depends on which representation is chosen for download and playback, and ideally S should be as close to B as possible in order to maximize the media playback quality for the given network conditions. Thus, to simplify this discussion, assume that S is approximately equal to B. Then the channel zapping time is proportional to n*t. Thus, using more connections to download different fragments can degrade the channel zapping time if the aggregate bandwidth used by the connections is sub-linearly proportional to the number of connections, which is typically the case. [00424] As an example, assume that t = 1 second, with n = 1 the value of B = 500 Kbps, with n = 2 the value of B = 700 Kbps, and with n = 3 the value of B = 800 Kbps.
Assume the representation with S = 700 Kbps is chosen. Then, with n = 2, the download time for the first block is 2*700/700 = 2 seconds, and with n = 3 the download time for the first block is 3*700/800 = 2.625 seconds. Additionally, as the number of connections increases, the variation in individual connection download speeds is likely to increase (although even with one connection there can be some significant variation). Thus, in this example, the channel zapping time, and the variation in channel zapping time, increase as the number of connections increases. Intuitively, the blocks being delivered have different priorities, that is, the first block has the earliest delivery deadline, the second block the next earliest deadline, and so on, whereas the download connections over which the blocks are being delivered compete for network resources during delivery, so blocks with earlier deadlines become increasingly delayed as more competing blocks are requested. On the other hand, even in this case, the use of more than one download connection ultimately allows a sustainably higher streaming rate to be supported; for example, with three connections a streaming rate of up to 800 Kbps can be supported in this example, whereas only a 500 Kbps stream can be supported with one connection. [00425] In practice, as noted above, the data rate of a connection can be highly variable, both within the same connection over time and between connections, and as a result the n requested blocks generally do not complete at the same time; indeed, it can commonly be the case that one block completes in half the time of another. This effect results in unpredictable behavior, since in some cases the first block may complete much earlier than the other blocks and in other cases much later, and as a result the start of playback can in some cases occur relatively quickly and in other cases be slow.
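The zapping-time model of paragraphs [00421] to [00424] is simple enough to check numerically. The sketch below (an illustration only, not part of the described embodiments) evaluates n*t*S/B with the example aggregate-bandwidth values B given in the text:

```python
def zapping_time(n, t, s_kbps, b_kbps):
    """Approximate channel zapping time n*t*S/B (seconds) under the
    idealized model: n parallel connections, blocks of t seconds,
    streaming rate S, fixed aggregate download bandwidth B."""
    return n * t * s_kbps / b_kbps

# Aggregate bandwidth B (Kbps) as a function of the number of connections n,
# taken from the example in the text (note the sub-linear growth).
B = {1: 500, 2: 700, 3: 800}

t = 1    # block playback duration in seconds
S = 700  # chosen representation rate in Kbps

print(zapping_time(2, t, S, B[2]))  # 2*700/700 = 2.0 seconds
print(zapping_time(3, t, S, B[3]))  # 3*700/800 = 2.625 seconds
```

As the text observes, adding the third connection raises the sustainable streaming rate but lengthens the time before the first block is playable, because B grows sub-linearly in n.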
This unpredictable behavior can be frustrating to the user and can therefore be considered a poor user experience. [00426] What is needed, therefore, are methods in which multiple TCP connections can be used to improve the channel zapping time and the variation in channel zapping time, while at the same time supporting the best possible quality streaming rate. What is also needed are methods to allow the share of the available bandwidth allocated to each block to be adjusted as a block's play-out time approaches, so that, if necessary, a larger portion of the available bandwidth can be allocated toward the block with the nearest play-out time. Cooperative HTTP/TCP Request [00427] Methods for using simultaneous HTTP/TCP requests cooperatively are now described. A receiver can employ multiple simultaneous cooperative HTTP/TCP requests, for example using a plurality of HTTP byte-range requests, where each request is for a part of a fragment in a source segment, or for an entire fragment of a source segment, or for a part of a repair fragment of a repair segment, or for an entire repair fragment of a repair segment. [00428] The advantages of cooperative HTTP/TCP requests together with the use of FEC repair data can be especially important for providing consistently fast channel zapping times. For example, at the moment of a channel zap it is likely that the TCP connections have just started or have been idle for some period of time, in which case the congestion window, cwnd, is at its minimum value for the connections; the delivery speed of these TCP connections will therefore take several RTTs to ramp up, and there will be high variation in the delivery speeds across the different TCP connections during this ramp-up time.
[00429] An overview of the non-FEC method is now described. This is a cooperative HTTP/TCP request method in which only the media data of the source blocks is requested using multiple simultaneous HTTP/TCP connections, that is, no FEC repair data is requested. With the non-FEC method, parts of the same fragment are requested over different connections, for example using HTTP byte-range requests for parts of the fragment, so that, for example, each HTTP byte-range request is for a part of the byte range indicated in the segment map for the fragment. It may be the case that an individual HTTP/TCP request ramps its delivery speed up to fully utilize the available bandwidth only over several RTTs, so there is a relatively long period of time during which the delivery speed is less than the available bandwidth; thus, if a single HTTP/TCP connection is used to download, for example, the first fragment of content to be played, the channel zapping time can be large. Using the non-FEC method, downloading different parts of the same fragment over different HTTP/TCP connections can significantly reduce the channel zapping time. [00430] An overview of the FEC method is now described. This is a cooperative HTTP/TCP request method in which the media data of a source segment and FEC repair data generated from that media data are requested using multiple simultaneous HTTP/TCP connections. With the FEC method, parts of the same fragment and FEC repair data generated from that fragment are requested over different connections, using HTTP byte-range requests for parts of the fragment, so that, for example, each HTTP byte-range request is for a part of the byte range indicated in the segment map for the fragment.
It may be the case that an individual HTTP/TCP request ramps its delivery speed up to fully utilize the available bandwidth only over several RTTs, so there is a relatively long period of time during which the delivery speed is less than the available bandwidth; thus, if a single HTTP/TCP connection is used to download, for example, the first fragment of content to be played, the channel zapping time can be large. Using the FEC method has the same advantages as the non-FEC method, with the additional advantage that not all of the requested data needs to arrive before the fragment can be recovered, thus further reducing the channel zapping time and the variation in channel zapping time. By making requests over different TCP connections, the amount of time it takes to deliver a sufficient amount of data to, for example, recover the first requested fragment and allow media playback to start, can be greatly reduced and made much more consistent than if cooperative TCP connections and FEC repair data were not used. [00431] Figures 24(a) to (e) illustrate an example of the delivery-rate fluctuations of 5 TCP connections running over the same link to the same client from the HTTP web server of an emulated evolution-data optimized (EVDO) network. In Figures 24(a) to (e), the X axis shows time in seconds, and the Y axis shows the rate at which bits are received at the client over each of the 5 TCP connections, measured over 1-second intervals for each connection. In this particular emulation there are 12 TCP connections in total running over this link, and thus the network was relatively loaded during the time shown, which can be typical when more than one client is streaming within the same cell of a mobile network. Note that although the delivery rates are somewhat correlated over time, there is a large difference in the delivery rates of the 5 connections at many points in time.
[00432] Figure 25 illustrates a possible request structure for a fragment that is 250,000 bits in size (approximately 31.25 kilobytes), where 4 byte-range requests are made in parallel for different parts of the fragment: the first HTTP connection requests the first 50,000 bits, the second HTTP connection requests the next 50,000 bits, the third HTTP connection requests the next 50,000 bits, and the fourth HTTP connection requests the next 50,000 bits. If FEC is not used, that is, in the non-FEC method, these are the only 4 requests for the fragment in this example. If FEC is used, that is, in the FEC method, then in this example there is one additional HTTP connection that requests an additional 50,000 bits of FEC repair data from a repair segment generated from the fragment. [00433] Figure 26 is a close-up of the first two seconds of the 5 TCP connections shown in Figures 24(a) to (e), where in Figure 26 the X axis shows time in 100-millisecond intervals, and the Y axis shows the rate at which bits are received at the client over each of the 5 TCP connections, measured over 100-millisecond intervals. One line shows the aggregate amount of bits received at the client for the fragment over the first 4 HTTP connections (excluding the HTTP connection over which FEC data is requested), that is, what arrives using the non-FEC method. Another line shows the aggregate amount of bits received at the client for the fragment over all 5 HTTP connections (including the HTTP connection over which FEC data is requested), that is, what arrives using the FEC method. For the FEC method, it is assumed that the fragment can be FEC decoded upon receipt of any 200,000 bits of the requested 250,000 bits, which is possible if, for example, a Reed-Solomon FEC code is used, and which is essentially possible if, for example, the RaptorQ code described in Luby IV is used.
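The parallel byte-range request structure of Figure 25 can be sketched as follows. HTTP Range headers address bytes rather than bits, so the example below expresses the four equal parts in bytes; the specific offsets and part count are illustrative assumptions:

```python
def byte_range_headers(start, size, num_parts):
    """Split a fragment occupying bytes [start, start+size) of its segment
    into num_parts contiguous HTTP Range headers, one per parallel
    HTTP/TCP connection (the last part absorbs any remainder)."""
    part = size // num_parts
    headers = []
    for i in range(num_parts):
        lo = start + i * part
        hi = start + size - 1 if i == num_parts - 1 else lo + part - 1
        headers.append("bytes=%d-%d" % (lo, hi))
    return headers

# Four parallel requests covering a 25,000-byte span of a fragment
# (6,250 bytes, i.e. 50,000 bits, each), as in the Figure 25 layout.
print(byte_range_headers(0, 25000, 4))
# For the FEC method, a 5th connection would request repair bytes from the
# corresponding repair segment (e.g. the source URL + ".repair", per [00448]).
```

Each header would be sent as the `Range` header of its own HTTP GET against the segment URL, one request per TCP connection.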
For the FEC method, in this example, enough data to recover the fragment using FEC decoding is received after 1 second, allowing a channel zapping time of 1 second (assuming that data for subsequent fragments can be requested and received before the first fragment has been fully played out). For the non-FEC method, in this example, all the data for the 4 requests must be received before the fragment can be recovered, which happens after 1.7 seconds, leading to a channel zapping time of 1.7 seconds. Thus, in the example illustrated in Figure 26, the non-FEC method is 70% worse in terms of channel zapping time than the FEC method. One of the reasons for the advantage shown by the FEC method in this example is that, for the FEC method, receiving any 80% of the requested data allows fragment recovery, whereas for the non-FEC method receipt of 100% of the requested data is required. Thus, the non-FEC method has to wait for the slowest TCP connection to finish delivery, and, given the natural variations in TCP delivery rate, there is a good chance of a wide variation in the delivery speed of the slowest TCP connection compared with an average TCP connection. With the FEC method in this example, one slow TCP connection does not determine when the fragment is recoverable. Instead, for the FEC method, the delivery of enough data is much more a function of the average TCP delivery rate than of the worst-case TCP delivery rate. [00434] There are many variations of the non-FEC method and the FEC method described above. For example, cooperative HTTP/TCP requests may be used only for the first few fragments after a channel zap has occurred, after which only a single HTTP/TCP request is used to download further fragments, multiple fragments, or entire segments.
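Why the FEC method tracks the average rather than the slowest connection can be seen with a toy calculation: with four equally sized source requests and one equally sized repair request, the non-FEC method completes when the slowest source request completes, while the FEC method (needing any 200,000 of the 250,000 requested bits, i.e. any 4 of the 5 equal parts) completes when the fourth-fastest of the five requests completes. The per-connection completion times below are illustrative assumptions, not measurements:

```python
def non_fec_finish(source_times):
    """Non-FEC method: every source request must complete."""
    return max(source_times)

def fec_finish(source_times, repair_time, needed):
    """FEC method: the fragment is recoverable once any `needed` of the
    equally sized requests (source + repair) have completed."""
    times = sorted(source_times + [repair_time])
    return times[needed - 1]

# Hypothetical completion times (seconds); one connection is a straggler.
source = [0.9, 1.0, 1.1, 1.7]
repair = 1.0

print(non_fec_finish(source))        # 1.7: must wait for the straggler
print(fec_finish(source, repair, 4)) # 1.1: any 4 of the 5 parts suffice
```

The straggler connection sets the non-FEC zapping time but is irrelevant to the FEC method, matching the 1.7 s versus 1 s comparison drawn from Figure 26.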
As another example, the number of cooperative HTTP/TCP connections used may be a function of both the urgency of the fragments being requested, that is, how imminent the play-out time of those fragments is, and the current network conditions. [00435] In some variations, a plurality of HTTP connections may be used to request repair data from repair segments. In other variations, different amounts of data may be requested over different HTTP connections, for example depending on the current size of the media buffer and the data reception rate at the client. In another variation, the source representations are not independent of one another, but instead represent layered media encoding, where, for example, an enhanced source representation may depend on a base source representation. In this case, there may be one repair representation corresponding to the base source representation and another repair representation corresponding to the combination of the base and enhanced source representations. [00436] Additional overall elements add to the advantages that can be realized by the methods described above. For example, the number of HTTP connections used may vary depending on the current amount of media in the media buffer, and/or the rate of reception into the media buffer.
Cooperative HTTP requests using FEC, that is, the FEC method described above and its variations, can be used aggressively when the media buffer is relatively empty; for example, more cooperative HTTP requests are made in parallel for different parts of the first fragment, requesting the entire source fragment and a relatively large fraction of repair data from the corresponding repair fragment, and then, as the media buffer grows, there is a transition to a reduced number of simultaneous HTTP requests that each request larger amounts of media data and a smaller fraction of repair data, for example transitioning to 1, 2, or 3 simultaneous HTTP requests, transitioning to making requests for entire fragments or multiple consecutive fragments per request, and transitioning to requesting no repair data at all. [00437] As another example, the amount of FEC repair data may vary as a function of the media buffer size: when the media buffer is small, more FEC repair data may be requested, and as the media buffer grows the amount of FEC repair data requested may decrease, until at some point, when the media buffer is large enough, no FEC repair data is requested at all, only data from the source segments of the source representations. The benefit of such enhanced techniques is that they can allow faster and more consistent channel zapping times and greater resilience against intermittent network conditions and potential media stalls, while at the same time minimizing the amount of additional bandwidth used beyond the amount that would be consumed by delivering only the media in the source segments, by reducing request message traffic and FEC repair data, while still allowing the highest possible media rates to be supported for the given network conditions.
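One way to read paragraphs [00436] and [00437] is as a policy that maps the current media-buffer level to a number of parallel connections and a fraction of repair data to request. The thresholds and values below are purely illustrative assumptions, not values given in the text:

```python
def request_policy(buffer_seconds):
    """Illustrative policy: aggressive cooperative HTTP + FEC when the
    media buffer is nearly empty, tapering off as the buffer grows.
    Returns (parallel_connections, repair_fraction_requested)."""
    if buffer_seconds < 2:
        # Just after a channel zap: many connections, large repair fraction.
        return 6, 0.5
    elif buffer_seconds < 10:
        # Buffer building up: fewer connections, less repair data.
        return 3, 0.2
    else:
        # Buffer comfortable: a single request, no FEC repair data.
        return 1, 0.0

print(request_policy(0.5))   # (6, 0.5)
print(request_policy(5.0))   # (3, 0.2)
print(request_policy(30.0))  # (1, 0.0)
```

A real client would also fold in the measured reception rate, as the text notes, rather than keying on buffer duration alone.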
Additional Improvements When Using Simultaneous HTTP Connections [00438] An HTTP/TCP request may be abandoned if a suitable condition is met, and another HTTP/TCP request may be made to download data that can replace the data requested in the abandoned request. The second HTTP/TCP request may request exactly the same data as the original request, for example the source data; or overlapping data, for example some of the same source data together with repair data that was not requested in the first request; or completely disjoint data, for example repair data that was not requested in the first request. An example of a suitable condition is that the request fails because of the absence of a response from the block serving infrastructure (BSI) within a given time, or a failure to establish a transport connection to the BSI, or receipt of an explicit failure message from the server, or another failure condition. [00439] Another example of a suitable condition is that reception of data is proceeding unusually slowly, according to a comparison of a measure of the connection speed (the data arrival rate in response to the request in question) with the expected connection speed, or with an estimate of the connection speed required to receive the response before the play-out time of the media data it contains, or another time dependent on that time. [00440] This approach is advantageous in the case where the BSI sometimes exhibits failures or poor performance. In this case, the approach above increases the likelihood that the client can continue reliable play-out of the media data despite failures or poor performance within the BSI. Note that in some cases there may be an advantage in designing the BSI in such a way that it does exhibit such failures or poor performance on occasion; for example, such a design may cost less than an alternative design that does not exhibit such failures or poor performance, or that exhibits them less frequently.
In that case, the method described here has the further advantage that it allows such a lower-cost design for the BSI without a consequent degradation in the user experience. [00441] In another embodiment, the number of requests issued for data corresponding to a given block may depend on whether a suitable condition with respect to the block is met. If the condition is not met, then the client may be restricted from making additional requests for the block, provided that successful completion of all currently incomplete data requests for the block would allow recovery of the block with high probability. If the condition is met, then a larger number of requests for the block may be issued, that is, the above restriction does not apply. An example of a suitable condition is that the time until the block's scheduled play-out moment, or another time dependent on that moment, falls below a given threshold. This method is advantageous in that additional requests for a block's data are issued when reception of the block becomes more urgent, as the play-out time of the media data comprising the block approaches. In the case of common transport protocols such as HTTP/TCP, these additional requests have the effect of increasing the share of the available bandwidth dedicated to data that contributes to reception of the block in question. This reduces the time required for enough data to recover the block to be received, and therefore reduces the probability that the block cannot be recovered before the scheduled play-out time of the media data comprising the block. As described above, if the block is not recovered before the scheduled play-out time of the media data comprising the block, then play-out may be interrupted, resulting in a poor user experience; the method described here therefore advantageously reduces the probability of that poor user experience.
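The deadline condition of paragraph [00441] can be sketched as a simple gate. The 2-second threshold below is an illustrative assumption; the text only requires that the time to the scheduled play-out moment fall below some given threshold:

```python
def may_issue_extra_requests(scheduled_play_time, now, threshold=2.0):
    """Allow additional (duplicate or repair) requests for a block only
    when its scheduled play-out moment is imminent, i.e. the remaining
    time falls below the threshold; otherwise rely on the outstanding
    requests, which are expected to suffice with high probability."""
    return (scheduled_play_time - now) < threshold

print(may_issue_extra_requests(100.0, 99.0))  # True: 1 s to deadline, escalate
print(may_issue_extra_requests(100.0, 90.0))  # False: 10 s of slack remains
```

Issuing extra requests only near the deadline concentrates the shared TCP bandwidth on the most urgent block, per the reasoning above.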
[00442] It should be understood that, throughout this specification, references to the scheduled play-out time of a block refer to the time at which the encoded media data comprising the block must first be available at the client in order to achieve play-out of the presentation without interruption. As will be clear to those skilled in the art of media presentation systems, this time is in practice slightly earlier than the actual time of appearance of the media comprising the block at the physical transducers used for play-out (screen, speakers, etc.), since several transformation functions may need to be applied to the media data comprising the block to effect the actual play-out of that block, and these functions may require a certain amount of time to complete. For example, media data is generally transported in compressed form, and a decompression transformation may be applied. Methods for Generating File Structures Supporting Cooperative HTTP/FEC Methods [00443] An embodiment for generating a file structure that can be used to advantage by a client employing cooperative HTTP/FEC methods is now described. In this embodiment, for each source segment there is a corresponding repair segment, generated as follows. The parameter R indicates, on average, how much FEC repair data is generated for the source data in the source segments. For example, R = 0.33 indicates that if a source segment contains 1,000 kilobytes of data, then the corresponding repair segment contains approximately 330 kilobytes of repair data. The parameter S indicates the symbol size, in bytes, used for FEC encoding and decoding. For example, S = 64 indicates that the source data and the repair data comprise symbols 64 bytes in size each for FEC encoding and decoding purposes. [00444] The repair segment can be generated for a source segment as follows.
Each fragment of the source segment is considered a source block for FEC encoding purposes, and thus each fragment is treated as a sequence of source symbols of a source block from which repair symbols are generated. The total number of repair symbols generated for the first i fragments is calculated as TNRS(i) = ceiling(R*B(i)/S), where ceiling(x) is the function that returns the smallest integer with a value that is at least x. Thus, the number of repair symbols generated for fragment i is NRS(i) = TNRS(i) - TNRS(i-1). [00445] The repair segment comprises a concatenation of the repair symbols for the fragments, where the order of the repair symbols within the repair segment follows the order of the fragments from which they are generated, and within a fragment the repair symbols are in order of their ESI. The repair segment structure corresponding to a source segment structure is illustrated in Figure 27, including a repair segment generator 2700. [00446] Note that, by defining the number of repair symbols for a fragment as described above, the total number of repair symbols for all previous fragments, and thus the byte index into the repair segment, depends only on R, S, B(i-1) and B(i), and does not depend on any prior or subsequent structure of the fragments within the source segment. This is beneficial because it allows a client to quickly compute the position of the start of a repair block within the repair segment, and also to quickly compute the number of repair symbols within that repair block, using only local information about the structure of the corresponding fragment of the source segment from which the repair block was generated. Thus, if a client decides to start downloading and playing out a fragment from the middle of a source segment, it can quickly generate and access the corresponding repair block from within the corresponding repair segment.
[00447] The number of source symbols in the source block corresponding to fragment i is calculated as NSS(i) = ceiling((B(i)-B(i-1))/S). The last source symbol is padded with zero bytes for FEC encoding and decoding purposes if B(i)-B(i-1) is not a multiple of S; that is, the last source symbol is padded with zero bytes so that it is S bytes in size for FEC encoding and decoding purposes, but these zero padding bytes are not stored as part of the source segment. In this embodiment, the ESIs for the source symbols are 0, 1, ..., NSS(i)-1 and the ESIs for the repair symbols are NSS(i), ..., NSS(i)+NRS(i)-1. [00448] The URL for a repair segment in this embodiment can be generated from the URL for the corresponding source segment simply by adding, for example, the suffix ".repair" to the source segment URL. [00449] The repair indexing information and FEC information for a repair segment are implicitly defined by the indexing information for the corresponding source segment and by the values of R and S, as described here. The time offsets and the fragment structure comprising the repair segment are determined by the time offsets and structure of the corresponding source segment. The byte offset of the end of the repair symbols in the repair segment corresponding to fragment i can be calculated as RB(i) = S*ceiling(R*B(i)/S). The number of bytes in the repair segment corresponding to fragment i is then RB(i)-RB(i-1), and thus the number of repair symbols corresponding to fragment i is calculated as NRS(i) = (RB(i)-RB(i-1))/S. The number of source symbols corresponding to fragment i can be calculated as NSS(i) = ceiling((B(i)-B(i-1))/S). In this embodiment, the repair indexing information for a repair block within a repair segment and the corresponding FEC information can be implicitly derived from R, S, and the indexing information for the corresponding fragment of the corresponding source segment.
[00450] As an example, consider the example illustrated in Figure 28, showing a fragment 2 that starts at byte offset B(1) = 6,410 and ends at byte offset B(2) = 6,770. In this example, the symbol size is S = 64 bytes, and the dotted vertical lines show the byte offsets within the source segment that correspond to multiples of S. The overall repair segment size, as a fraction of the source segment size, is set to R = 0.5 in this example. The number of source symbols in the source block for fragment 2 is calculated as NSS(2) = ceiling((6,770-6,410)/64) = ceiling(5.625) = 6, and these 6 source symbols have ESIs 0, ..., 5, respectively, where the first source symbol is the first 64 bytes of fragment 2, starting at byte index 6,410 within the source segment, the second source symbol is the next 64 bytes of fragment 2, starting at byte index 6,474 within the source segment, etc. The final byte offset of the repair block corresponding to fragment 2 is calculated as RB(2) = 64*ceiling(0.5*6,770/64) = 64*ceiling(52.89...) = 64*53 = 3,392, and the initial byte offset of the repair block corresponding to fragment 2 is calculated as RB(1) = 64*ceiling(0.5*6,410/64) = 64*ceiling(50.07...) = 64*51 = 3,264. Thus, in this example, there are two repair symbols in the repair block corresponding to fragment 2, with ESIs 6 and 7, respectively, starting at byte offset 3,264 within the repair segment and ending at byte offset 3,392. [00451] Note that in the example illustrated in Figure 28, even though R = 0.5 and there are 6 source symbols corresponding to fragment 2, the number of repair symbols is not 3, as one might expect if the number of source symbols were simply used to calculate the number of repair symbols, but instead turns out to be 2, in accordance with the methods described here.
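The index arithmetic of paragraphs [00447] to [00450] can be checked directly. The sketch below implements the stated formulas and reproduces the Figure 28 numbers (S = 64, R = 0.5, B(1) = 6,410, B(2) = 6,770):

```python
import math

def nss(b_prev, b_cur, s):
    """Number of source symbols for a fragment spanning bytes [b_prev, b_cur):
    NSS(i) = ceiling((B(i) - B(i-1)) / S)."""
    return math.ceil((b_cur - b_prev) / s)

def rb(b, r, s):
    """Byte offset of the end of a fragment's repair symbols in the repair
    segment: RB(i) = S * ceiling(R * B(i) / S)."""
    return s * math.ceil(r * b / s)

def nrs(b_prev, b_cur, r, s):
    """Number of repair symbols for the fragment: (RB(i) - RB(i-1)) / S."""
    return (rb(b_cur, r, s) - rb(b_prev, r, s)) // s

S, R = 64, 0.5
B1, B2 = 6410, 6770      # byte offsets of fragment 2 within the source segment

print(nss(B1, B2, S))    # 6 source symbols (ESIs 0..5)
print(rb(B1, R, S))      # 3264: repair block for fragment 2 starts here
print(rb(B2, R, S))      # 3392: ...and ends here
print(nrs(B1, B2, R, S)) # 2 repair symbols (ESIs 6 and 7)
```

As paragraph [00446] emphasizes, these values depend only on R, S and the two byte offsets of the fragment, so a client joining mid-segment can locate the repair block without any global bookkeeping.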
Rather than simply using the number of source symbols of a fragment to determine the number of repair symbols, the embodiments described above make it possible to calculate the position of the repair block within the repair segment solely from the index information associated with the corresponding source block of the corresponding source segment. Additionally, as the number K of source symbols in a source block grows, the number KR of repair symbols of the corresponding repair block closely approximates K*R, since, in general, KR is at most ceiling(K*R) and KR is at least floor((K-1)*R), where floor(x) is the largest integer that is at most x. [00452] There are many variations of the above embodiments for generating a file structure that can be used to advantage by a client employing cooperative HTTP/FEC methods, as those skilled in the art will recognize. As an example of an alternative embodiment, an original segment for a representation can be divided into N > 1 parallel segments, where, for i = 1, ..., N, a specified fraction Fi of the original segment is contained in parallel segment i, and where the sum over i = 1, ..., N of the Fi is equal to 1. In this embodiment, there can be one main segment map that is used to derive the segment maps for all the parallel segments, similarly to how the repair segment map is derived from the source segment map in the embodiment described above. For example, the main segment map can indicate the fragment structure as if all the source media data were not divided into parallel segments but instead contained in the original segment, and then the segment map for parallel segment i can be derived from the main segment map by calculating that, if the amount of media data in a prefix of fragments of the original segment is L bytes, then the total number of bytes of that prefix contained in the aggregate of the first i parallel segments is ceiling(L*Gi), where Gi is the sum over j = 1, ..., i of the Fj.
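The parallel-segment derivation just described reduces to one formula. A minimal sketch, with an assumed split into N = 3 parallel segments (the fractions below are illustrative):

```python
import math

def prefix_bytes_in_parallel(l_bytes, fractions, i):
    """Aggregate number of bytes of an L-byte original-segment prefix
    contained in parallel segments 1..i: ceiling(L * G_i), where
    G_i = F_1 + ... + F_i."""
    g_i = sum(fractions[:i])
    return math.ceil(l_bytes * g_i)

# Hypothetical split of an original segment into 3 parallel segments.
F = [0.5, 0.25, 0.25]   # fractions F_i; they must sum to 1
L = 1001                # bytes of media data in some fragment prefix

print(prefix_bytes_in_parallel(L, F, 1))  # 501
print(prefix_bytes_in_parallel(L, F, 2))  # 751
print(prefix_bytes_in_parallel(L, F, 3))  # 1001
```

As with the repair-segment map, the per-parallel-segment map is thus derivable from the main segment map alone, with no extra indexing data stored.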
As another example of an alternative embodiment, segments can be formed by combining the original source media data for each fragment followed immediately by the repair data for that fragment, resulting in a segment that contains a mixture of source media data and repair data generated from that source media data using an FEC code. As another example of an alternative embodiment, a segment containing a mixture of source media data and repair data can be divided into multiple parallel segments, each containing a mixture of source media data and repair data. [00453] Further embodiments can be envisioned by those skilled in the art after reading this description. In other embodiments, combinations or sub-combinations of the invention described above can be advantageously created. Illustrative arrangements of the components are shown for purposes of illustration, and it should be understood that combinations, additions, rearrangements and the like are contemplated in alternative embodiments of the present invention. Thus, while the invention has been described with respect to illustrative embodiments, those skilled in the art will recognize that numerous modifications are possible. [00454] For example, the processes described here can be implemented using hardware components, software components, and/or any combination thereof. In some cases, software components may be provided on tangible non-transitory media for execution on hardware that is provided with the media or that is separate from the media. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. It will, however, be evident that various modifications and changes can be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims, and the invention is intended to cover all modifications and equivalents within the scope of the following claims.
Claims (9) [0001] 1. Method for use in a communication system in which a client device requests media files from a media ingestion system, the method CHARACTERIZED by comprising: providing, in the media ingestion system, multiple layers (1204, 1206, 1208) of media data within media files, wherein a version of the media content may be constructed from a subset of the multiple layers (1204, 1206, 1208); and providing metadata (1202) to enable construction of requests for the layers (1204, 1206, 1208) of media data; wherein layers comprising a block are stored within a single file and metadata is provided specifying byte ranges within the file corresponding to the individual layers. [0002] 2. Method according to claim 1, CHARACTERIZED in that the layers are generated using a technique described in ITU-T Standard H.264/SVC or ITU-T Standard H.264/AVC. [0003] 3. Method according to claim 1, CHARACTERIZED in that the metadata is provided in a media presentation description, MPD, or as part of a media file. [0004] 4. Apparatus for use in a communication system in which a client device requests media files from a media ingestion system, the apparatus CHARACTERIZED by comprising: means for providing, in the media ingestion system, multiple layers (1204, 1206, 1208) of media data within media files, wherein a version of the media content may be constructed from a subset of the multiple layers (1204, 1206, 1208); and means for providing metadata (1202) to enable construction of requests for the layers (1204, 1206, 1208) of media data; wherein layers comprising a block are stored within a single file and metadata is provided specifying byte ranges within the file corresponding to the individual layers. [0005] 5. 
Method for use in a communication system where a client device requests media blocks from a media ingest system, the method CHARACTERIZED by comprising: receiving, at the client device, multiple layers (1204, 1206, 1208) of media data within media files, wherein a version of media content can be constructed from a multi-layer subset (1204, 1206, 1208); and receiving metadata (1202) to enable building requests for the layers (1204, 1206, 1208) of media data; where layers comprising a block are stored within a single file and metadata is provided specifying byte ranges within the file corresponding to the individual layers. [0006] 6. Method according to claim 5, CHARACTERIZED in that layers are generated using a technique described in ITU-T Standard H.264/SVC or ITU-T Standard H.264/AVC. [0007] 7. Method according to claim 5, CHARACTERIZED by the metadata being provided in a media presentation description, MPD, or as a part of a media file. [0008] 8. Apparatus for use in a communication system in which a client device requests blocks of media from a media ingest system, the apparatus CHARACTERIZED by comprising: means for receiving, on the client device, multiple layers (1204, 1206, 1208) of media data within media files, wherein a version of media content may be constructed from a multi-layer subset (1204, 1206, 1208); and means for receiving metadata (1202) to enable construction of requests for the layers (1204, 1206, 1208) of media data; where layers comprising a block are stored within a single file and metadata is provided specifying byte ranges within the file corresponding to the individual layers. [0009] 9. Memory CHARACTERIZED by comprising instructions which, when executed, cause at least one computer to perform a method as defined in any one of claims 1 to 3 or 5 to 7.
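The claimed arrangement stores all layers of a block in a single file and supplies metadata giving each layer's byte range, which maps naturally onto HTTP Range requests: the client requests the byte ranges of as many layers as it wants, and more layers yield a higher-quality rendition. A minimal client-side sketch follows; the metadata structure, field names, and byte offsets are hypothetical examples, not values from the patent.

```python
from dataclasses import dataclass


@dataclass
class LayerInfo:
    """One layer's entry in the metadata: its id and byte range in the file."""
    layer_id: int
    byte_range: tuple  # inclusive (start, end) byte offsets within the file


def range_header(layers, max_layers: int) -> str:
    """Build an HTTP Range header value requesting the lowest `max_layers`
    layers of a block; requesting more layers improves presentation quality."""
    chosen = sorted(layers, key=lambda l: l.layer_id)[:max_layers]
    return "bytes=" + ", ".join(f"{s}-{e}" for s, e in (l.byte_range for l in chosen))


# Example metadata as an MPD might describe it (hypothetical offsets):
layers = [
    LayerInfo(0, (0, 9999)),       # base layer
    LayerInfo(1, (10000, 24999)),  # first enhancement layer
    LayerInfo(2, (25000, 49999)),  # second enhancement layer
]
print(range_header(layers, 2))  # bytes=0-9999, 10000-24999
```

A client under bandwidth pressure would call `range_header(layers, 1)` to fetch only the base layer, then issue further range requests for the enhancement layers as capacity allows.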
Patent family (publication number, publication date): HUE042143T2 (2019-06-28); EP2481198A1 (2012-08-01); US20110096828A1 (2011-04-28); JP2013505681A (2013-02-14); BR112012006377A2 (2016-04-05); RU2523918C2 (2014-07-27); CN106209892A (2016-12-07); CN110072117A (2019-07-30); EP2481198B1 (2018-11-14); CN106209892B (2019-06-21); JP5722331B2 (2015-05-20); SI2481198T1 (2019-02-28); CN108322769A (2018-07-24); CN102577308A (2012-07-11); ZA201202936B (2012-12-27); RU2012116083A (2013-10-27); CA2774925A1 (2011-03-31); CN108322769B (2021-01-05); CN110072117B (2022-03-08); HK1256233A1 (2019-09-20); KR20120069746A (2012-06-28); DK2481198T3 (2019-02-18); KR101395200B1 (2014-05-15); WO2011038021A1 (2011-03-31); CA2774925C (2015-06-16); ES2711374T3 (2019-05-03).
Ltd.|Systems and methods for presenting content streams to a client device| US9094737B2|2013-05-30|2015-07-28|Sonic Ip, Inc.|Network video streaming with trick play based on separate trick play files| US9380099B2|2013-05-31|2016-06-28|Sonic Ip, Inc.|Synchronizing multiple over the top streaming clients| US9100687B2|2013-05-31|2015-08-04|Sonic Ip, Inc.|Playback synchronization across playback devices| EP3005712A1|2013-06-06|2016-04-13|ActiveVideo Networks, Inc.|Overlay rendering of user interface onto source video| US9219922B2|2013-06-06|2015-12-22|Activevideo Networks, Inc.|System and method for exploiting scene graph information in construction of an encoded video sequence| US9294785B2|2013-06-06|2016-03-22|Activevideo Networks, Inc.|System and method for exploiting scene graph information in construction of an encoded video sequence| US20140366091A1|2013-06-07|2014-12-11|Amx, Llc|Customized information setup, access and sharing during a live conference| US9544352B2|2013-06-11|2017-01-10|Bitmovin Gmbh|Adaptation logic for varying a bitrate| US9967305B2|2013-06-28|2018-05-08|Divx, Llc|Systems, methods, and media for streaming media content| EP2962467A1|2013-07-19|2016-01-06|Huawei Technologies Co., Ltd.|Metadata information signaling and carriage in dynamic adaptive streaming over hypertext transfer protocol| IN2013MU02890A|2013-09-05|2015-07-03|Tata Consultancy Services Ltd| US9621616B2|2013-09-16|2017-04-11|Sony Corporation|Method of smooth transition between advertisement stream and main stream| US9401944B2|2013-10-22|2016-07-26|Qualcomm Incorporated|Layered adaptive HTTP streaming| US9286159B2|2013-11-06|2016-03-15|HGST Netherlands B.V.|Track-band squeezed-sector error correction in magnetic data storage devices| EP2890075B1|2013-12-26|2016-12-14|Telefonica Digital España, S.L.U.|A method and a system for smooth streaming of media content in a distributed content delivery network| US9386067B2|2013-12-30|2016-07-05|Sonic Ip, Inc.|Systems and methods for playing 
adaptive bitrate streaming content by multicast| US9229813B2|2014-03-06|2016-01-05|HGST Netherlands B.V.|Error correction with on-demand parity sectors in magnetic data storage devices| US9635077B2|2014-03-14|2017-04-25|Adobe Systems Incorporated|Low latency live video streaming| US9350484B2|2014-03-18|2016-05-24|Qualcomm Incorporated|Transport accelerator implementing selective utilization of redundant encoded content data functionality| RU2678323C2|2014-03-18|2019-01-28|Конинклейке Филипс Н.В.|Audiovisual content item data streams| WO2015150736A1|2014-03-31|2015-10-08|British Telecommunications Public Limited Company|Multicast streaming| US9866878B2|2014-04-05|2018-01-09|Sonic Ip, Inc.|Systems and methods for encoding and playing back video at different frame rates using enhancement layers| US9483310B2|2014-04-29|2016-11-01|Bluedata Software, Inc.|Associating cache memory with a work process| US9563846B2|2014-05-01|2017-02-07|International Business Machines Corporation|Predicting and enhancing document ingestion time| EP3160153B1|2014-06-20|2020-10-28|Sony Corporation|Reception device, reception method, transmission device, and transmission method| JP2017526228A|2014-08-07|2017-09-07|ソニック アイピー, インコーポレイテッド|System and method for protecting a base bitstream incorporating independently encoded tiles| EP3179729B1|2014-08-07|2021-08-25|Sony Group Corporation|Transmission device, transmission method and reception device| US9596285B2|2014-09-11|2017-03-14|Harman International Industries, Incorporated|Methods and systems for AVB networks| US10176157B2|2015-01-03|2019-01-08|International Business Machines Corporation|Detect annotation error by segmenting unannotated document segments into smallest partition| ES2874748T3|2015-01-06|2021-11-05|Divx Llc|Systems and methods for encoding and sharing content between devices| JP6588987B2|2015-02-27|2019-10-09|ソニック アイピー, インコーポレイテッド|System and method for frame copying and frame expansion in live video encoding and streaming| 
US10929353B2|2015-04-29|2021-02-23|Box, Inc.|File tree streaming in a virtual file system for cloud-based shared content| WO2016202885A1|2015-06-15|2016-12-22|Piksel, Inc|Processing content streaming| US10021187B2|2015-06-29|2018-07-10|Microsoft Technology Licensing, Llc|Presenting content using decoupled presentation resources| JP6258897B2|2015-07-01|2018-01-10|シャープ株式会社|Content acquisition device, content acquisition method, metadata distribution device, and metadata distribution method| US9736730B2|2015-11-05|2017-08-15|At&T Intellectual Property I, L.P.|Wireless video download rate optimization| RU2610686C1|2015-11-17|2017-02-14|федеральное государственное бюджетное образовательное учреждение высшего образования "Рязанский государственный университет имени С.А. Есенина"|Method for adaptive transmission of information via communication channel in real time and system for its implementation| US9880780B2|2015-11-30|2018-01-30|Samsung Electronics Co., Ltd.|Enhanced multi-stream operations| US9898202B2|2015-11-30|2018-02-20|Samsung Electronics Co., Ltd.|Enhanced multi-streaming though statistical analysis| US10063422B1|2015-12-29|2018-08-28|Amazon Technologies, Inc.|Controlled bandwidth expansion in compressed disaggregated storage systems| MX2018009876A|2016-02-16|2018-11-09|Nokia Technologies Oy|Media encapsulating and decapsulating.| BR112018016069A2|2016-03-08|2019-01-02|Ipcom Gmbh & Co Kg|method for determining a transmission time interval duration, radio access network node, core entity, and user equipment device| US10750217B2|2016-03-21|2020-08-18|Lg Electronics Inc.|Broadcast signal transmitting/receiving device and method| US10075292B2|2016-03-30|2018-09-11|Divx, Llc|Systems and methods for quick start-up of playback| US11038938B2|2016-04-25|2021-06-15|Time Warner Cable Enterprises Llc|Methods and apparatus for providing alternative content| US11269951B2|2016-05-12|2022-03-08|Dolby International Ab|Indexing variable bit stream audio formats| 
US10129574B2|2016-05-24|2018-11-13|Divx, Llc|Systems and methods for providing variable speeds in a trick-play mode| US10231001B2|2016-05-24|2019-03-12|Divx, Llc|Systems and methods for providing audio content during trick-play playback| US10148989B2|2016-06-15|2018-12-04|Divx, Llc|Systems and methods for encoding video content| US10812558B1|2016-06-27|2020-10-20|Amazon Technologies, Inc.|Controller to synchronize encoding of streaming content| US10652625B1|2016-06-27|2020-05-12|Amazon Technologies, Inc.|Synchronization of multiple encoders for streaming content| US10652292B1|2016-06-28|2020-05-12|Amazon Technologies, Inc.|Synchronization of multiple encoders for streaming content| US10389785B2|2016-07-17|2019-08-20|Wei-Chung Chang|Method for adaptively streaming an audio/visual material| US10476943B2|2016-12-30|2019-11-12|Facebook, Inc.|Customizing manifest file for enhancing media streaming| US10440085B2|2016-12-30|2019-10-08|Facebook, Inc.|Effectively fetch media content for enhancing media streaming| CN107846605B|2017-01-19|2020-09-04|湖南快乐阳光互动娱乐传媒有限公司|System and method for generating streaming media data of anchor terminal, and system and method for live network broadcast| US10498795B2|2017-02-17|2019-12-03|Divx, Llc|Systems and methods for adaptive switching between multiple content delivery networks during adaptive bitrate streaming| US10652166B2|2017-06-27|2020-05-12|Cisco Technology, Inc.|Non-real time adaptive bitrate recording scheduler| FR3070566B1|2017-08-30|2020-09-04|Sagemcom Broadband Sas|PROCESS FOR RECOVERING A TARGET FILE OF AN OPERATING SOFTWARE AND DEVICE FOR USE| US10681104B1|2017-09-13|2020-06-09|Amazon Technologies, Inc.|Handling media timeline offsets| US10609189B2|2018-02-19|2020-03-31|Verizon Digital Media Services Inc.|Seamless stream failover with distributed manifest generation| US10891100B2|2018-04-11|2021-01-12|Matthew Cohn|System and method for capturing and accessing real-time audio and associated metadata| 
US10638180B1|2018-07-20|2020-04-28|Amazon Technologies, Inc.|Media timeline management| CN109840371B|2019-01-23|2020-09-08|北京航空航天大学|Dynamic multilayer coupling network construction method based on time sequence| WO2021012051A1|2019-07-23|2021-01-28|Lazar Entertainment Inc.|Live media content delivery systems and methods|
Legal status:
- 2019-01-08 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
- 2020-02-27 | B06U | Preliminary requirement: requests with searches performed by other patent offices; procedure suspended [chapter 6.21 patent gazette]
- 2020-03-10 | B15K | Others concerning applications: alteration of classification. Free-format text: the previous classifications were H04L 29/06, H04N 7/24; IPC: H04L 29/06 (2006.01), H04N 7/24 (2011.01), H04N 21
- 2021-03-09 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
- 2021-05-18 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]. Free-format text: term of validity: 20 (twenty) years counted from 2010-09-22, subject to the legal conditions. Patent granted in accordance with ADI 5.529/DF
Priority:
Application number | Filing date | Patent title
- US24476709P | 2009-09-22 |
- US25771909P | 2009-11-03 |
- US61/257,719 | 2009-11-03 |
- US25808809P | 2009-11-04 |
- US61/258,088 | 2009-11-04 |
- US28577909P | 2009-12-11 |
- US61/285,779 | 2009-12-11 |
- US29672510P | 2010-01-20 |
- US61/296,725 | 2010-01-20 |
- US37239910P | 2010-08-10 |
- US61/372,399 | 2010-08-10 |
- US12/887,480 (US20110096828A1) | 2010-09-21 | Enhanced block-request streaming using scalable encoding
- PCT/US2010/049852 (WO2011038021A1) | 2010-09-22 | Enhanced block-request streaming using scalable encoding